
Pass conformance test #137

Closed · k82cn opened this issue May 7, 2019 · 27 comments
Labels: kind/feature (Categorizes issue or PR as related to a new feature), priority/high

@k82cn (Member) commented May 7, 2019

Is this a BUG REPORT or FEATURE REQUEST?:

/kind feature

Description:

Cherry-pick the related PRs from kube-batch into volcano-sh/kube-batch for the conformance test.

/cc @asifdxtreme

@k82cn (Member, Author) commented May 7, 2019

/priority high
/kind feature

volcano-sh-bot added the priority/high and kind/feature labels on May 7, 2019
k82cn added this to the v0.1 milestone on May 7, 2019
@k82cn (Member, Author) commented May 8, 2019

/assign @asifdxtreme

@k82cn (Member, Author) commented May 8, 2019

We need to get this done in v0.1.

@asifdxtreme (Contributor):

cc @shivramsrivastava

@shivramsrivastava (Contributor):

I have the following queries:

  1. Should we test it now, or after v0.1 is released?
  2. If we need to test before the v0.1 release, have all the PRs targeted for v0.1 already been merged?
  3. The k8s conformance test only creates simple Pods, ReplicaSets, Jobs, and other basic k8s types, so we will be testing only the scheduler code; we won't be able to test the job controller, admission controller, or the CLI tools of volcano.

@asifdxtreme (Contributor):

> 1. Should we test it now, or after v0.1 is released?
> 2. If we need to test before the v0.1 release, have all the PRs targeted for v0.1 already been merged?
> 3. The k8s conformance test only creates simple Pods, ReplicaSets, Jobs, and other basic k8s types, so we will be testing only the scheduler code; we won't be able to test the job controller, admission controller, or the CLI tools of volcano.

  1. We can run the test before we go for the v0.1 release; a sketch of such a run follows below.
  2. We can go ahead with what is currently in the master branch.
  3. @k82cn Can you add your insights on this?
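
For reference, such a conformance run can be driven with the upstream e2e.test binary by focusing on the [Conformance] tag, along these lines (a sketch; the kubeconfig path is an assumption, and the target cluster's default scheduler must be the volcano/kube-batch one):

# Sketch: run only the [Conformance]-tagged specs against an existing cluster.
# e2e.test is built from k8s.io/kubernetes with: make WHAT=test/e2e/e2e.test
$ ./e2e.test --provider=local --ginkgo.focus="\[Conformance\]" --kubeconfig=$HOME/.kube/config 2>&1 | tee conformance.log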

@k82cn (Member, Author) commented May 9, 2019

> The k8s conformance test only creates simple Pods, ReplicaSets, Jobs, and other basic k8s types, so we will be testing only the scheduler code; we won't be able to test the job controller, admission controller, or the CLI tools of volcano.

This part is OK with me: only test the scheduler; but the binaries/images should be built from the volcano repo :)
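
A hypothetical sketch of what "built from the volcano repo" could look like (the clone URL is the project's repo; the make target and image handling are assumptions, not documented build steps):

# Hypothetical: build the scheduler binaries/images from volcano-sh/volcano so the
# conformance run exercises this project's code rather than upstream kube-batch.
$ git clone https://github.com/volcano-sh/volcano.git && cd volcano
$ make images   # assumed target that builds and tags the component images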

asifdxtreme modified the milestones: v0.1 → v0.2 on May 13, 2019
@k82cn (Member, Author) commented May 15, 2019

What's the current status?

@shivramsrivastava (Contributor):

I ran it against a local kind cluster, as I did before for kube-batch, and hit some crashes.
I was not able to complete the test run, and the crashes were random.
Now I am trying 'kubetest' with a GCE cluster, since I suspect the issue may be with my local machine; a sketch of that invocation is below.
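
For context, a kubetest-driven GCE run like the one described might look like this (a sketch; the extract version and focus regex are assumptions):

# Sketch: bring up a GCE cluster, run the conformance subset, then tear it down.
# Assumes GCP credentials and project are already configured for the gce provider.
$ kubetest --extract=v1.15.0-alpha.1 --provider=gce --up --test --test_args="--ginkgo.focus=\[Conformance\]" --down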

@k82cn (Member, Author) commented May 15, 2019

that's great, thanks :)

@shivramsrivastava (Contributor):

The test cases below are failing.

Summarizing 8 Failures:

[Fail] [sig-apps] Daemon set [Serial] [It] should run and stop simple daemon [Conformance] 
/home/ubuntu/newdev/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:131

[Fail] [sig-api-machinery] Aggregator [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] 
/home/ubuntu/newdev/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:391

[Fail] [sig-apps] Daemon set [Serial] [It] should retry creating failed daemon pods [Conformance] 
/home/ubuntu/newdev/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:257

[Fail] [k8s.io] [sig-node] Events [It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance] 
/home/ubuntu/newdev/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/events.go:96

[Fail] [sig-apps] Daemon set [Serial] [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] 
/home/ubuntu/newdev/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:338

[Fail] [sig-apps] Daemon set [Serial] [It] should rollback without unnecessary restarts [Conformance] 
/home/ubuntu/newdev/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:396

[Fail] [sig-scheduling] SchedulerPredicates [Serial] [It] validates resource limits of pods that are allowed to run  [Conformance] 
/home/ubuntu/newdev/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716

[Fail] [sig-scheduling] SchedulerPredicates [Serial] [It] validates that NodeSelector is respected if not matching  [Conformance] 
/home/ubuntu/newdev/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716

Ran 236 of 3966 Specs in 10893.107 seconds
FAIL! -- 228 Passed | 8 Failed | 0 Pending | 3730 Skipped
--- FAIL: TestE2E (10893.19s)
FAIL

@shivramsrivastava (Contributor):

These three test cases are related to the events fix, which is available in the 'kube-batch' code but apparently not yet vendored into this project.
With that fix, these three test cases will pass.

[Fail] [sig-scheduling] SchedulerPredicates [Serial] [It] validates resource limits of pods that are allowed to run  [Conformance] 
/home/ubuntu/newdev/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716

[Fail] [sig-scheduling] SchedulerPredicates [Serial] [It] validates that NodeSelector is respected if not matching  [Conformance] 
/home/ubuntu/newdev/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716

[Fail] [k8s.io] [sig-node] Events [It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance] 
/home/ubuntu/newdev/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/events.go:96

The test case below fails even with the default scheduler, and also with 'kube-batch', so we can ignore it for now and investigate later.

[Fail] [sig-api-machinery] Aggregator [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] 
/home/ubuntu/newdev/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:391

@shivramsrivastava (Contributor):

These are new failures, related to DaemonSets, that were not present when we ran earlier with 'kube-batch'. I will investigate them; they mostly look flaky, or something is not right with the cluster.

[Fail] [sig-apps] Daemon set [Serial] [It] should run and stop simple daemon [Conformance] 
/home/ubuntu/newdev/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:131

[Fail] [sig-apps] Daemon set [Serial] [It] should retry creating failed daemon pods [Conformance] 
/home/ubuntu/newdev/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:257

[Fail] [sig-apps] Daemon set [Serial] [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] 
/home/ubuntu/newdev/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:338

[Fail] [sig-apps] Daemon set [Serial] [It] should rollback without unnecessary restarts [Conformance] 
/home/ubuntu/newdev/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:396

@k82cn (Member, Author) commented May 16, 2019

> These three test cases are related to the events fix, which is available in the 'kube-batch' code but apparently not yet vendored into this project.

Please help to cherry-pick them :)
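
A minimal sketch of that cherry-pick, pulling the fix from upstream kube-batch into the volcano-sh/kube-batch fork (<commit-sha> is a placeholder for the actual events-fix commit):

$ git remote add kube-batch https://github.com/kubernetes-sigs/kube-batch.git
$ git fetch kube-batch
$ git cherry-pick <commit-sha>   # placeholder for the events-fix commit(s)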

@k82cn (Member, Author) commented May 31, 2019

What's the status of this? Can we pass the conformance test now?

@shivramsrivastava (Contributor):

I ran the tests on a local kind cluster and all of them passed.
I tested with a GCE cluster setup as well, because the initial (failing) runs were on a GCE cluster set up with kubetest.
Here are the results for the previously failing [sig-apps] DaemonSet test cases.

Test case one:

$ ./e2e.test --provider=local --ginkgo.focus="should update pod when spec was updated and update strategy is RollingUpdate" --kubeconfig=/home/root1/.kube/config 2>&1  | tee resource_limit_e2e_conformance.log
I0605 22:13:55.864435    9318 e2e.go:241] Starting e2e run "d405ccf9-9540-4260-afd2-c700ac6474dc" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1559753035 - Will randomize all specs
Will run 1 of 4412 specs

Jun  5 22:13:55.981: INFO: >>> kubeConfig: /home/root1/.kube/config
Jun  5 22:13:55.983: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun  5 22:13:57.738: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun  5 22:13:58.979: INFO: 27 / 27 pods in namespace 'kube-system' are running and ready (1 seconds elapsed)
Jun  5 22:13:58.979: INFO: expected 9 pod replicas in namespace 'kube-system', 9 are Running and Ready.
Jun  5 22:13:58.979: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun  5 22:13:59.285: INFO: 4 / 4 pods ready in namespace 'kube-system' in daemonset 'fluentd-gcp-v3.2.0' (0 seconds elapsed)
Jun  5 22:13:59.285: INFO: 4 / 4 pods ready in namespace 'kube-system' in daemonset 'metadata-proxy-v0.1' (0 seconds elapsed)
Jun  5 22:13:59.285: INFO: e2e test version: v1.15.0-alpha.1.2013+88f8c785b45d53
Jun  5 22:13:59.579: INFO: kube-apiserver version: v1.15.0-alpha.1.2013+88f8c785b45d53
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun  5 22:13:59.592: INFO: >>> kubeConfig: /home/root1/.kube/config
STEP: Building a namespace api object, basename daemonsets
Jun  5 22:14:00.673: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun  5 22:14:02.651: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jun  5 22:14:03.200: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  5 22:14:03.470: INFO: Number of nodes with available pods: 0
Jun  5 22:14:03.470: INFO: Node e2e-test-root1-minion-group-dp09 is running more than one daemon pod
Jun  5 22:14:04.802: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  5 22:14:05.109: INFO: Number of nodes with available pods: 0
Jun  5 22:14:05.109: INFO: Node e2e-test-root1-minion-group-dp09 is running more than one daemon pod
Jun  5 22:14:05.724: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  5 22:14:06.030: INFO: Number of nodes with available pods: 0
Jun  5 22:14:06.030: INFO: Node e2e-test-root1-minion-group-dp09 is running more than one daemon pod
Jun  5 22:14:06.713: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  5 22:14:06.953: INFO: Number of nodes with available pods: 2
Jun  5 22:14:06.953: INFO: Node e2e-test-root1-minion-group-dp09 is running more than one daemon pod
Jun  5 22:14:07.772: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  5 22:14:08.012: INFO: Number of nodes with available pods: 3
Jun  5 22:14:08.012: INFO: Number of running nodes: 3, number of available pods: 3
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jun  5 22:14:10.060: INFO: Wrong image for pod: daemon-set-r8dsp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun  5 22:14:10.060: INFO: Wrong image for pod: daemon-set-r9h96. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun  5 22:14:10.060: INFO: Wrong image for pod: daemon-set-tg85c. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun  5 22:14:10.337: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/.
.
.
.

Jun  5 22:14:37.058: INFO: Number of nodes with available pods: 3
Jun  5 22:14:37.058: INFO: Number of running nodes: 3, number of available pods: 3
[AfterEach] [sig-apps] Daemon set [Serial]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3107, will wait for the garbage collector to delete the pods
Jun  5 22:14:39.515: INFO: Deleting DaemonSet.extensions daemon-set took: 314.021056ms
Jun  5 22:14:40.215: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.324275ms
Jun  5 22:14:46.888: INFO: Number of nodes with available pods: 0
Jun  5 22:14:46.888: INFO: Number of running nodes: 0, number of available pods: 0
Jun  5 22:14:47.195: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3107/daemonsets","resourceVersion":"8103"},"items":null}

Jun  5 22:14:47.440: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3107/pods","resourceVersion":"8103"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun  5 22:14:48.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3107" for this suite.
Jun  5 22:14:55.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun  5 22:15:06.959: INFO: namespace daemonsets-3107 deletion completed in 18.226293242s

• [SLOW TEST:67.367 seconds]
[sig-apps] Daemon set [Serial]
/home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
Jun  5 22:15:06.991: INFO: Running AfterSuite actions on all nodes
Jun  5 22:15:06.991: INFO: Running AfterSuite actions on node 1

Ran 1 of 4412 Specs in 71.014 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 4411 Skipped
PASS


@shivramsrivastava (Contributor):

Test case two:
$ ./e2e.test --provider=local --ginkgo.focus="should run and stop simple daemon" --kubeconfig=/home/root1/.kube/config 2>&1  | tee resource_limit_e2e_conformance.log
I0605 22:09:37.401260    9179 e2e.go:241] Starting e2e run "b5cc77e0-c09e-41e2-83c2-4840443aa7d5" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1559752776 - Will randomize all specs
Will run 1 of 4412 specs

Jun  5 22:09:37.492: INFO: >>> kubeConfig: /home/root1/.kube/config
Jun  5 22:09:37.494: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun  5 22:09:41.017: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun  5 22:09:43.416: INFO: 27 / 27 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
Jun  5 22:09:43.416: INFO: expected 9 pod replicas in namespace 'kube-system', 9 are Running and Ready.
Jun  5 22:09:43.416: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun  5 22:09:43.994: INFO: 4 / 4 pods ready in namespace 'kube-system' in daemonset 'fluentd-gcp-v3.2.0' (0 seconds elapsed)
Jun  5 22:09:43.994: INFO: 4 / 4 pods ready in namespace 'kube-system' in daemonset 'metadata-proxy-v0.1' (0 seconds elapsed)
Jun  5 22:09:43.994: INFO: e2e test version: v1.15.0-alpha.1.2013+88f8c785b45d53
Jun  5 22:09:44.599: INFO: kube-apiserver version: v1.15.0-alpha.1.2013+88f8c785b45d53
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun  5 22:09:44.632: INFO: >>> kubeConfig: /home/root1/.kube/config
STEP: Building a namespace api object, basename daemonsets
Jun  5 22:09:46.845: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jun  5 22:09:52.314: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  5 22:09:52.895: INFO: Number of nodes with available pods: 0
Jun  5 22:09:52.895: INFO: Node e2e-test-root1-minion-group-dp09 is running more than one daemon pod
Jun  5 22:09:54.450: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  5 22:09:55.033: INFO: Number of nodes with available pods: 0
Jun  5 22:09:55.033: INFO: Node e2e-test-root1-minion-group-dp09 is running more than one daemon pod
Jun  5 22:09:56.413: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  5 22:09:56.990: INFO: Number of nodes with available pods: 3
Jun  5 22:09:56.990: INFO: Number of running nodes: 3, number of available pods: 3
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jun  5 22:09:59.448: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  5 22:10:00.062: INFO: Number of nodes with available pods: 2
Jun  5 22:10:00.062: INFO: Node e2e-test-root1-minion-group-jk64 is running more than one daemon pod
Jun  5 22:10:01.702: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  5 22:10:02.198: INFO: Number of nodes with available pods: 2
Jun  5 22:10:02.198: INFO: Node e2e-test-root1-minion-group-jk64 is running more than one daemon pod
Jun  5 22:10:03.589: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  5 22:10:04.118: INFO: Number of nodes with available pods: 3
Jun  5 22:10:04.118: INFO: Number of running nodes: 3, number of available pods: 3
[AfterEach] [sig-apps] Daemon set [Serial]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4481, will wait for the garbage collector to delete the pods
Jun  5 22:10:06.456: INFO: Deleting DaemonSet.extensions daemon-set took: 521.585656ms
Jun  5 22:10:07.157: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.319506ms
Jun  5 22:10:17.370: INFO: Number of nodes with available pods: 0
Jun  5 22:10:17.370: INFO: Number of running nodes: 0, number of available pods: 0
Jun  5 22:10:17.910: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4481/daemonsets","resourceVersion":"7533"},"items":null}

Jun  5 22:10:18.495: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4481/pods","resourceVersion":"7535"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun  5 22:10:20.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4481" for this suite.
Jun  5 22:10:28.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun  5 22:10:49.992: INFO: namespace daemonsets-4481 deletion completed in 28.732082672s

• [SLOW TEST:65.360 seconds]
[sig-apps] Daemon set [Serial]
/home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
Jun  5 22:10:49.992: INFO: Running AfterSuite actions on all nodes
Jun  5 22:10:49.992: INFO: Running AfterSuite actions on node 1

Ran 1 of 4412 Specs in 72.512 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 4411 Skipped
PASS

@shivramsrivastava (Contributor):

Test case three:

$ ./e2e.test --provider=local --ginkgo.focus="should retry creating failed daemon pods" --kubeconfig=/home/root1/.kube/config 2>&1  | tee resource_limit_e2e_conformance.log
I0605 22:12:09.227528    9249 e2e.go:241] Starting e2e run "aa2118b6-1d4e-462d-adb3-c0e4c5b3a292" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1559752928 - Will randomize all specs
Will run 1 of 4412 specs

Jun  5 22:12:09.322: INFO: >>> kubeConfig: /home/root1/.kube/config
Jun  5 22:12:09.325: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun  5 22:12:11.855: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun  5 22:12:13.442: INFO: 27 / 27 pods in namespace 'kube-system' are running and ready (1 seconds elapsed)
Jun  5 22:12:13.442: INFO: expected 9 pod replicas in namespace 'kube-system', 9 are Running and Ready.
Jun  5 22:12:13.442: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun  5 22:12:13.913: INFO: 4 / 4 pods ready in namespace 'kube-system' in daemonset 'fluentd-gcp-v3.2.0' (0 seconds elapsed)
Jun  5 22:12:13.913: INFO: 4 / 4 pods ready in namespace 'kube-system' in daemonset 'metadata-proxy-v0.1' (0 seconds elapsed)
Jun  5 22:12:13.913: INFO: e2e test version: v1.15.0-alpha.1.2013+88f8c785b45d53
Jun  5 22:12:14.310: INFO: kube-apiserver version: v1.15.0-alpha.1.2013+88f8c785b45d53
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun  5 22:12:14.336: INFO: >>> kubeConfig: /home/root1/.kube/config
STEP: Building a namespace api object, basename daemonsets
Jun  5 22:12:16.153: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jun  5 22:12:21.786: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  5 22:12:22.093: INFO: Number of nodes with available pods: 0
Jun  5 22:12:22.093: INFO: Node e2e-test-root1-minion-group-dp09 is running more than one daemon pod
Jun  5 22:12:23.424: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  5 22:12:23.666: INFO: Number of nodes with available pods: 1
Jun  5 22:12:23.666: INFO: Node e2e-test-root1-minion-group-jk64 is running more than one daemon pod
Jun  5 22:12:24.347: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  5 22:12:24.653: INFO: Number of nodes with available pods: 1
Jun  5 22:12:24.653: INFO: Node e2e-test-root1-minion-group-jk64 is running more than one daemon pod
Jun  5 22:12:25.370: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  5 22:12:25.613: INFO: Number of nodes with available pods: 3
Jun  5 22:12:25.613: INFO: Number of running nodes: 3, number of available pods: 3
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jun  5 22:12:26.701: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  5 22:12:27.008: INFO: Number of nodes with available pods: 2
Jun  5 22:12:27.008: INFO: Node e2e-test-root1-minion-group-tg63 is running more than one daemon pod
Jun  5 22:12:28.339: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  5 22:12:28.647: INFO: Number of nodes with available pods: 2
Jun  5 22:12:28.647: INFO: Node e2e-test-root1-minion-group-tg63 is running more than one daemon pod
Jun  5 22:12:29.261: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  5 22:12:29.568: INFO: Number of nodes with available pods: 2
Jun  5 22:12:29.568: INFO: Node e2e-test-root1-minion-group-tg63 is running more than one daemon pod
Jun  5 22:12:30.285: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  5 22:12:30.593: INFO: Number of nodes with available pods: 3
Jun  5 22:12:30.593: INFO: Number of running nodes: 3, number of available pods: 3
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7159, will wait for the garbage collector to delete the pods
Jun  5 22:12:32.025: INFO: Deleting DaemonSet.extensions daemon-set took: 261.698928ms
Jun  5 22:12:32.926: INFO: Terminating DaemonSet.extensions daemon-set pods took: 900.209579ms
Jun  5 22:12:44.211: INFO: Number of nodes with available pods: 0
Jun  5 22:12:44.211: INFO: Number of running nodes: 0, number of available pods: 0
Jun  5 22:12:44.455: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7159/daemonsets","resourceVersion":"7823"},"items":null}

Jun  5 22:12:44.696: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7159/pods","resourceVersion":"7824"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun  5 22:12:45.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7159" for this suite.
Jun  5 22:12:52.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun  5 22:13:04.282: INFO: namespace daemonsets-7159 deletion completed in 18.226294433s

• [SLOW TEST:49.946 seconds]
[sig-apps] Daemon set [Serial]
/home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
Jun  5 22:13:04.297: INFO: Running AfterSuite actions on all nodes
Jun  5 22:13:04.297: INFO: Running AfterSuite actions on node 1

Ran 1 of 4412 Specs in 54.981 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 4411 Skipped
PASS

@shivramsrivastava (Contributor):

Test case four:

$ ./e2e.test --provider=local --ginkgo.focus="should rollback without unnecessary restarts" --kubeconfig=/home/root1/.kube/config 2>&1  | tee resource_limit_e2e_conformance.log
I0605 22:16:25.493364    9357 e2e.go:241] Starting e2e run "c500184d-2568-4786-880c-bb464520b9b2" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1559753184 - Will randomize all specs
Will run 1 of 4412 specs

Jun  5 22:16:25.587: INFO: >>> kubeConfig: /home/root1/.kube/config
Jun  5 22:16:25.589: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun  5 22:16:27.448: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun  5 22:16:28.516: INFO: 27 / 27 pods in namespace 'kube-system' are running and ready (1 seconds elapsed)
Jun  5 22:16:28.516: INFO: expected 9 pod replicas in namespace 'kube-system', 9 are Running and Ready.
Jun  5 22:16:28.516: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun  5 22:16:28.790: INFO: 4 / 4 pods ready in namespace 'kube-system' in daemonset 'fluentd-gcp-v3.2.0' (0 seconds elapsed)
Jun  5 22:16:28.790: INFO: 4 / 4 pods ready in namespace 'kube-system' in daemonset 'metadata-proxy-v0.1' (0 seconds elapsed)
Jun  5 22:16:28.790: INFO: e2e test version: v1.15.0-alpha.1.2013+88f8c785b45d53
Jun  5 22:16:29.084: INFO: kube-apiserver version: v1.15.0-alpha.1.2013+88f8c785b45d53
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun  5 22:16:29.098: INFO: >>> kubeConfig: /home/root1/.kube/config
STEP: Building a namespace api object, basename daemonsets
Jun  5 22:16:30.313: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun  5 22:16:32.465: INFO: Create a RollingUpdate DaemonSet
Jun  5 22:16:32.771: INFO: Check that daemon pods launch on every node of the cluster
Jun  5 22:16:33.079: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  5 22:16:33.385: INFO: Number of nodes with available pods: 0
Jun  5 22:16:33.386: INFO: Node e2e-test-root1-minion-group-dp09 is running more than one daemon pod
Jun  5 22:16:34.628: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  5 22:16:34.921: INFO: Number of nodes with available pods: 3
Jun  5 22:16:34.921: INFO: Number of running nodes: 3, number of available pods: 3
Jun  5 22:16:34.921: INFO: Update the DaemonSet to trigger a rollout
Jun  5 22:16:35.476: INFO: Updating DaemonSet daemon-set
Jun  5 22:16:39.567: INFO: Roll back the DaemonSet before rollout is complete
Jun  5 22:16:40.093: INFO: Updating DaemonSet daemon-set
Jun  5 22:16:40.093: INFO: Make sure DaemonSet rollback is complete
Jun  5 22:16:40.349: INFO: Wrong image for pod: daemon-set-fgbhx. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jun  5 22:16:40.349: INFO: Pod daemon-set-fgbhx is not available
Jun  5 22:16:40.657: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  5 22:16:41.987: INFO: Wrong image for pod: daemon-set-fgbhx. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jun  5 22:16:41.987: INFO: Pod daemon-set-fgbhx is not available
Jun  5 22:16:42.295: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  5 22:16:42.898: INFO: Pod daemon-set-p4gzb is not available
Jun  5 22:16:43.217: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  5 22:16:43.932: INFO: Pod daemon-set-p4gzb is not available
Jun  5 22:16:44.241: INFO: DaemonSet pods can't tolerate node e2e-test-root1-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8247, will wait for the garbage collector to delete the pods
Jun  5 22:16:45.755: INFO: Deleting DaemonSet.extensions daemon-set took: 243.821114ms
Jun  5 22:16:46.455: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.236761ms
Jun  5 22:16:54.204: INFO: Number of nodes with available pods: 0
Jun  5 22:16:54.204: INFO: Number of running nodes: 0, number of available pods: 0
Jun  5 22:16:54.483: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8247/daemonsets","resourceVersion":"8379"},"items":null}

Jun  5 22:16:54.787: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8247/pods","resourceVersion":"8380"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun  5 22:16:56.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8247" for this suite.
Jun  5 22:17:03.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun  5 22:17:14.550: INFO: namespace daemonsets-8247 deletion completed in 18.226412469s

• [SLOW TEST:45.452 seconds]
[sig-apps] Daemon set [Serial]
/home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
Jun  5 22:17:14.575: INFO: Running AfterSuite actions on all nodes
Jun  5 22:17:14.575: INFO: Running AfterSuite actions on node 1

Ran 1 of 4412 Specs in 48.992 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 4411 Skipped
PASS

@shivramsrivastava
Contributor

Even the test case that was failing with kube-batch now passes when run manually.

$ ./e2e.test --provider=local --ginkgo.focus="Should be able to support the 1.10 Sample API Server using the current Aggregator" --kubeconfig=/home/root1/.kube/config 2>&1  | tee resource_limit_e2e_conformance.log
I0605 22:28:13.735897    9646 e2e.go:241] Starting e2e run "75148c8e-5ccc-46cf-ba4f-0751d8dc620c" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1559753892 - Will randomize all specs
Will run 1 of 4412 specs

Jun  5 22:28:13.853: INFO: >>> kubeConfig: /home/root1/.kube/config
Jun  5 22:28:13.855: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun  5 22:28:15.491: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun  5 22:28:16.692: INFO: 27 / 27 pods in namespace 'kube-system' are running and ready (1 seconds elapsed)
Jun  5 22:28:16.692: INFO: expected 9 pod replicas in namespace 'kube-system', 9 are Running and Ready.
Jun  5 22:28:16.692: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun  5 22:28:16.990: INFO: 4 / 4 pods ready in namespace 'kube-system' in daemonset 'fluentd-gcp-v3.2.0' (0 seconds elapsed)
Jun  5 22:28:16.990: INFO: 4 / 4 pods ready in namespace 'kube-system' in daemonset 'metadata-proxy-v0.1' (0 seconds elapsed)
Jun  5 22:28:16.990: INFO: e2e test version: v1.15.0-alpha.1.2013+88f8c785b45d53
Jun  5 22:28:17.289: INFO: kube-apiserver version: v1.15.0-alpha.1.2013+88f8c785b45d53
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun  5 22:28:17.307: INFO: >>> kubeConfig: /home/root1/.kube/config
STEP: Building a namespace api object, basename aggregator
Jun  5 22:28:18.463: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jun  5 22:28:18.723: INFO: >>> kubeConfig: /home/root1/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Jun  5 22:28:22.819: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695350700, loc:(*time.Location)(0x80a0580)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695350700, loc:(*time.Location)(0x80a0580)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695350700, loc:(*time.Location)(0x80a0580)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695350700, loc:(*time.Location)(0x80a0580)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun  5 22:28:25.072: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695350700, loc:(*time.Location)(0x80a0580)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695350700, loc:(*time.Location)(0x80a0580)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695350700, loc:(*time.Location)(0x80a0580)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695350700, loc:(*time.Location)(0x80a0580)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun  5 22:28:27.120: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695350700, loc:(*time.Location)(0x80a0580)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695350700, loc:(*time.Location)(0x80a0580)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695350700, loc:(*time.Location)(0x80a0580)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695350700, loc:(*time.Location)(0x80a0580)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun  5 22:28:29.168: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695350700, loc:(*time.Location)(0x80a0580)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695350700, loc:(*time.Location)(0x80a0580)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695350700, loc:(*time.Location)(0x80a0580)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695350700, loc:(*time.Location)(0x80a0580)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun  5 22:28:31.113: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695350700, loc:(*time.Location)(0x80a0580)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695350700, loc:(*time.Location)(0x80a0580)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63695350700, loc:(*time.Location)(0x80a0580)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63695350700, loc:(*time.Location)(0x80a0580)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun  5 22:28:35.517: INFO: Waited 2.045700528s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun  5 22:28:45.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-2382" for this suite.
Jun  5 22:28:52.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun  5 22:29:03.551: INFO: namespace aggregator-2382 deletion completed in 18.202215026s

• [SLOW TEST:46.244 seconds]
[sig-api-machinery] Aggregator
/home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /home/root1/kubebapath/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSJun  5 22:29:03.584: INFO: Running AfterSuite actions on all nodes
Jun  5 22:29:03.584: INFO: Running AfterSuite actions on node 1

Ran 1 of 4412 Specs in 49.737 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 4411 Skipped
PASS

@shivramsrivastava
Contributor

I have manually cherry-picked the event fixes that were done in kube-batch; after that, the three test cases that were failing due to events are passing now.

These changes are now cherry-picked; refer to fix1 and fix2.
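
For reference, a minimal sketch of the cherry-pick flow, assuming the fixes live in the upstream kube-batch repo; the commit hashes are placeholders, not the real fix1/fix2 commits:

# Fetch the upstream kube-batch history and cherry-pick the event fixes
# (placeholder hashes standing in for fix1 and fix2).
$ git remote add kube-batch https://github.com/kubernetes-sigs/kube-batch.git
$ git fetch kube-batch
$ git cherry-pick <fix1-commit> <fix2-commit>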

@k82cn
Member Author

k82cn commented Jun 9, 2019

The PRs are merged; please bump to Volcano. We need to get this done ASAP!

@shivramsrivastava
Contributor

Will close this issue as the k8s conformance test cases are passing.

Method of testing:

  • Created a GCE cluster with the latest k8s code using kubetest (please refer to the commit hash
    below).
  • Removed the default scheduler from the cluster.
  • Ran the volcano-sh scheduler with the flag 'scheduler-name=default-scheduler' (see the sketch
    below).
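
For anyone reproducing this, a minimal sketch of those steps, assuming a GCE project with SSH access to the master; the kubetest flags, static-pod manifest path, and vc-scheduler binary name are assumptions about a typical setup, not details from this thread:

# Bring up a cluster from the latest k8s source (assumed kubetest flags).
$ kubetest --up --provider=gce --gcp-project=<my-gcp-project>

# On the master: disable the default scheduler by moving its static-pod
# manifest out of the kubelet manifest directory (assumed path).
$ sudo mv /etc/kubernetes/manifests/kube-scheduler.manifest /tmp/

# Run the Volcano scheduler in its place (assumed binary name).
$ vc-scheduler --scheduler-name=default-scheduler --kubeconfig=$HOME/.kube/config &

# Run the full conformance suite with the same e2e.test binary used above.
$ ./e2e.test --provider=local --ginkgo.focus="\[Conformance\]" --kubeconfig=$HOME/.kube/config 2>&1 | tee conformance.log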

k8s git commit with which this was tested:

$ git log -n 1
commit 88f8c785b45d533fef42c1758f57f4de8c5404b1 (HEAD -> master, origin/master, origin/HEAD)
Merge: 714fcd910f 89a7a35607
Author: Kubernetes Prow Robot <k8s-ci-robot@users.noreply.github.com>
Date:   Tue Jun 4 22:31:52 2019 -0700

Volcano commit hash:

$ git log -n 1
commit a20f3be17e021781c70e58244064a6381bfce3de (HEAD -> master, origin/master, origin/HEAD)
Merge: ec144dd b21573e
Author: Volcano Bot <49986348+volcano-sh-bot@users.noreply.github.com>
Date:   Tue Jun 4 16:01:21 2019 +0800

kubectl version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:35:51Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.0-alpha.1.2013+88f8c785b45d53", GitCommit:"88f8c785b45d533fef42c1758f57f4de8c5404b1", GitTreeState:"clean", BuildDate:"2019-06-05T15:17:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

@shivramsrivastava
Contributor

/assign

@shivramsrivastava
Contributor

/close

@shivramsrivastava
Contributor

@k82cn
Please let me know if this requires any more information.

@k82cn
Member Author

k82cn commented Jun 17, 2019

That's great, thanks :)

@k82cn k82cn closed this as completed Jun 17, 2019