
Periodic e2e tests for workload cluster upgrade from v1.21 to latest are failing #4462

Closed
fabriziopandini opened this issue Apr 12, 2021 · 6 comments

@fabriziopandini
Member

Looking at https://testgrid.k8s.io/sig-cluster-lifecycle-cluster-api#capi-e2e-main-1-21-latest&width=20, those tests are consistently failing:

When upgrading a workload cluster and testing K8S conformance [Conformance] [K8s-Upgrade]
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade_test.go:27
  Should create and upgrade a workload cluster and run kubetest [It]
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:82
  Timed out after 900.003s.
  Expected
      <bool>: false
  to be true
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/deployment_helpers.go:248
  Full Stack Trace
  sigs.k8s.io/cluster-api/test/framework.WaitForDNSUpgrade(0x211d800, 0xc0000440d8, 0x7f6bc8430ee0, 0xc00002a150, 0xc00004221b, 0x5, 0x0, 0x0, 0x0)
  	/home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/deployment_helpers.go:248 +0x13c
  sigs.k8s.io/cluster-api/test/framework.UpgradeControlPlaneAndWaitForUpgrade(0x211d800, 0xc0000440d8, 0x2134810, 0xc000ba2200, 0xc00059e340, 0xc00084d800, 0xc00004c05e, 0x22, 0xc000042158, 0x8, ...)
  	/home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/controlplane_helpers.go:335 +0xe13
  sigs.k8s.io/cluster-api/test/e2e.ClusterUpgradeConformanceSpec.func2()
  	/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:108 +0xa5b
  github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000401380, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
  	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.15.2/internal/leafnodes/runner.go:113 +0xa3
  github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000401380, 0x3, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
  	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.15.2/internal/leafnodes/runner.go:64 +0x15c
  github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc0003fcc20, 0x20e0f20, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
  	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.15.2/internal/leafnodes/it_node.go:26 +0x87
  github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc000502c30, 0x0, 0x20e0f20, 0xc000054880)
  	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.15.2/internal/spec/spec.go:215 +0x72f
  github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc000502c30, 0x20e0f20, 0xc000054880)
  	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.15.2/internal/spec/spec.go:138 +0xf2
  github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0000e6420, 0xc000502c30, 0x0)
  	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.15.2/internal/specrunner/spec_runner.go:200 +0x111
  github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0000e6420, 0x1)
  	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.15.2/internal/specrunner/spec_runner.go:170 +0x147
  github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0000e6420, 0xc000044470)
  	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.15.2/internal/specrunner/spec_runner.go:66 +0x117
  github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0000fc070, 0x7f6bca897d38, 0xc00050f800, 0x1e80cf7, 0x8, 0xc0003fcb00, 0x2, 0x2, 0x2125e58, 0xc000054880, ...)
  	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.15.2/internal/suite/suite.go:79 +0x546
  github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x20e25e0, 0xc00050f800, 0x1e80cf7, 0x8, 0xc0003fcae0, 0x2, 0x2, 0x2)
  	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.15.2/ginkgo_dsl.go:229 +0x218
  github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x20e25e0, 0xc00050f800, 0x1e80cf7, 0x8, 0xc00011df60, 0x1, 0x1, 0x60708fc4)
  	/home/prow/go/pkg/mod/github.com/onsi/ginkgo@v1.15.2/ginkgo_dsl.go:217 +0xad
  sigs.k8s.io/cluster-api/test/e2e.TestE2E(0xc00050f800)
  	/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/e2e_suite_test.go:86 +0x136
  testing.tRunner(0xc00050f800, 0x1f55150)
  	/usr/local/go/src/testing/testing.go:1194 +0xef
  created by testing.(*T).Run
  	/usr/local/go/src/testing/testing.go:1239 +0x2b3
------------------------------
STEP: Tearing down the management cluster
Summarizing 1 Failure:
[Fail] When upgrading a workload cluster and testing K8S conformance [Conformance] [K8s-Upgrade] [It] Should create and upgrade a workload cluster and run kubetest 
/home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/deployment_helpers.go:248
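
For context, the assertion that times out is in the framework's `WaitForDNSUpgrade` helper (visible in the stack trace), which waits for the cluster's DNS (CoreDNS) Deployment to reach the target version; Gomega's Eventually prints "Expected <bool>: false to be true" when the poll never succeeds within the window. Below is a minimal sketch of that polling pattern, not the exact framework code: the `waitForDNSUpgrade` name, the kube-system/coredns lookup, and the image-tag check are illustrative assumptions.

```go
// Illustrative sketch of the polling pattern behind the failure above.
// Not the actual test/framework code: names and lookup details are assumed.
package framework

import (
	"context"
	"strings"
	"time"

	. "github.com/onsi/gomega"
	appsv1 "k8s.io/api/apps/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForDNSUpgrade polls the CoreDNS Deployment in kube-system until its
// image tag matches the expected version, failing the spec on timeout.
func waitForDNSUpgrade(ctx context.Context, c client.Client, toVersion string, timeout, interval time.Duration) {
	Eventually(func() bool {
		d := &appsv1.Deployment{}
		key := client.ObjectKey{Namespace: "kube-system", Name: "coredns"}
		if err := c.Get(ctx, key, d); err != nil {
			return false // keep polling; transient API errors are expected mid-upgrade
		}
		if len(d.Spec.Template.Spec.Containers) == 0 {
			return false
		}
		// The upgrade is considered done once the pod template references the new tag.
		image := d.Spec.Template.Spec.Containers[0].Image
		return strings.HasSuffix(image, ":"+toVersion)
	}, timeout, interval).Should(BeTrue())
}
```

The "Timed out after 900.003s" in the log is that wait expiring: the DNS Deployment never reported the upgraded version within the 15-minute window.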

/kind failing-test
@srm09
@sbueringer

@k8s-ci-robot k8s-ci-robot added the kind/failing-test Categorizes issue or PR as related to a consistently or frequently failing test. label Apr 12, 2021
@sbueringer
Member

I'll take a look.

There are a few open issues regarding 1.21 with kind, so we might have to wait until the next kind release (I think the current estimate is next week):

I think they are not related, but who knows :)

/assign

@sbueringer
Member

I tried to run the test locally (with kind 0.10.0) and definitely hit kubernetes-sigs/kind#2189.

Not sure why the e2e test in Prow only fails a bit later. But since we want to update kind pretty soon anyway, I'll open a PoC PR using a kind version built from main to check whether the tests work with the upcoming kind release.

I would probably not invest too much time in finding out why the test fails differently with kind 0.9.0 vs. kind 0.10.0.

@sbueringer
Member

First green run: https://prow.k8s.io/job-history/gs/kubernetes-jenkins/logs/periodic-cluster-api-e2e-workload-upgrade-1-21-latest-main

@sbueringer
Member

Two green runs in a row.

@fabriziopandini should we close the issue?

@fabriziopandini
Member Author

/close
Thanks @sbueringer for the great work here!

@k8s-ci-robot
Contributor

@fabriziopandini: Closing this issue.

In response to this:

/close
Thanks @sbueringer for the great work here!

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
