Old release branch test cleaning #18509

Merged

Conversation

@tpepper (Member) commented Jul 28, 2020

Following today's SIG Testing meeting on cleaning up test noise, I was looking at some items in http://storage.googleapis.com/k8s-metrics/failures-latest.json and adjacent test configs. This PR is an attempt to start cleaning out things that were relevant, at most, back when we were dealing with releases that are no longer community supported; leaving them in place obscures current work.

Signed-off-by: Tim Pepper <tpepper@vmware.com>

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels Jul 28, 2020
@k8s-ci-robot k8s-ci-robot requested review from chewong and figo July 28, 2020 19:30
@k8s-ci-robot k8s-ci-robot added area/config Issues or PRs related to code in /config area/jobs area/provider/aws Issues or PRs related to aws provider area/provider/azure Issues or PRs related to azure provider area/provider/vmware Issues or PRs related to vmware provider area/testgrid sig/cloud-provider Categorizes an issue or PR as relevant to SIG Cloud Provider. sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. sig/node Categorizes an issue or PR as relevant to SIG Node. sig/testing Categorizes an issue or PR as relevant to SIG Testing. labels Jul 28, 2020
@tpepper tpepper force-pushed the old_release_branch_test_cleaning branch from 9a8cea5 to 3784efa on July 28, 2020 19:41
@tpepper (Member, Author) commented Jul 28, 2020

/retest

@tpepper tpepper force-pushed the old_release_branch_test_cleaning branch 2 times, most recently from 7c52545 to 4f3d3c2 on July 28, 2020 20:24
@tpepper (Member, Author) commented Jul 28, 2020

/assign @dims
for a start since I touch a buuuunch of lines of his yaml

@tpepper (Member, Author) commented Jul 28, 2020

@BenTheElder curious if you think I broke anything for Kind

@dims (Member) commented Jul 28, 2020

/approve

@dims (Member) commented Jul 28, 2020

thanks for doing this @tpepper

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jul 28, 2020
@tpepper (Member, Author) commented Jul 29, 2020

/assign @BenTheElder

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jul 29, 2020
@tpepper tpepper force-pushed the old_release_branch_test_cleaning branch from 4f3d3c2 to 13afa45 on July 29, 2020 18:28
@k8s-ci-robot k8s-ci-robot added needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. and removed needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. labels Jul 29, 2020
@tpepper tpepper force-pushed the old_release_branch_test_cleaning branch from 13afa45 to 1e6f456 on July 30, 2020 20:15
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jul 30, 2020
@spiffxp (Member) left a comment

I would be fine with this if you save 978458a (the cri-containerd jobs) for a different PR

config/jobs/kubernetes/sig-node/containerd.yaml (outdated; resolved)
Comment on lines -259 to -260
- --repo=k8s.io/kubernetes=release-1.15
- --repo=github.com/containerd/cri=release/1.2
Member:

Does cri 1.2 not work with later releases of kubernetes?

Member Author:

It might make sense to keep this if there is somebody curating it. Perhaps it is naive of me to see a test of this odd combination (which mostly passes) and then jump to an assumption of non-value. There might be value in a test called "containerd-node-conformance" if that is testing a meaningful variation. I don't understand what this "1.2" thing is though, as there's no such tag on containerd/cri, although it is a branch, which hasn't seen commits since January. There's a 1.3 branch active to March, and a 1.11 branch active through last year. Guess this isn't semver.
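
For reference, a quick way to double-check that kind of branch activity (a sketch, assuming a local clone of containerd/cri with its remote branches fetched) would be something like:

  # confirm release/1.2 exists as a branch (not a tag) on the remote
  git ls-remote --heads https://github.com/containerd/cri release/1.2
  git ls-remote --tags https://github.com/containerd/cri 'release/1.2*'
  # show when each release branch last saw a commit
  for b in release/1.2 release/1.3 release/1.11; do
    git log -1 --format="$b  %ci  %h  %s" "origin/$b"
  done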

I would propose these old variations be dropped and an issue opened for a test plan of containerd variations. Based on curl http://storage.googleapis.com/k8s-metrics/failures-latest.json | grep -A1 containerd | grep -v ^--, it seems there's a gap in curation on things containerd:

  "ci-cri-containerd-e2e-gce-stackdriver": {
    "failing_days": 889
  "ci-cri-containerd-e2e-gci-gce-sd-logging": {
    "failing_days": 856
  "ci-cri-containerd-e2e-gci-gce-sd-logging-k8s-resources": {
    "failing_days": 847
  "ci-cri-containerd-e2e-gci-gce-statefulset": {
    "failing_days": 808
  "ci-cri-containerd-node-e2e-serial": {
    "failing_days": 760
  "ci-cri-containerd-e2e-gci-gce-es-logging": {
    "failing_days": 651
  "ci-cri-containerd-node-e2e-flaky": {
    "failing_days": 426
  "ci-kubernetes-e2e-windows-containerd-gce": {
    "failing_days": 235
  "ci-kubernetes-e2e-containerd-gce-netd-calico": {
    "failing_days": 232
  "ci-kubernetes-e2e-containerd-gce-netd": {
    "failing_days": 232
  "ci-cri-containerd-e2e-ubuntu-gce": {
    "failing_days": 177
  "ci-containerd-e2e-ubuntu-gce": {
    "failing_days": 177
  "ci-kubernetes-e2e-windows-containerd-gce-1-18": {
    "failing_days": 164
  "ci-cri-containerd-e2e-gci-gce-flaky": {
    "failing_days": 153
  "ci-cri-containerd-e2e-gci-gce-alpha-features": {
    "failing_days": 126
  "ci-containerd-soak-gci-gce": {
    "failing_days": 125
  "pr:pull-cri-containerd-windows-cri": {
    "failing_days": 119
  "ci-cri-containerd-node-e2e-benchmark": {
    "failing_days": 119
  "ci-cri-containerd-cri-validation-windows": {
    "failing_days": 119
  "ci-containerd-node-e2e-features-1-2": {
    "failing_days": 119
  "ci-kubernetes-e2e-aks-engine-azure-master-windows-containerd-hyperv": {
    "failing_days": 87
  "ci-kubernetes-e2e-aks-engine-azure-master-windows-containerd-hyperv-serial-slow": {
    "failing_days": 87
  "ci-containerd-node-e2e-features-1-3": {
    "failing_days": 60
  "ci-containerd-node-e2e-1-3": {
    "failing_days": 60
  "ci-containerd-node-e2e-1-2": {
    "failing_days": 60
  "ci-kubernetes-e2e-gci-gce-scalability-containerd-correctness": {
    "failing_days": 44
  "ci-kubernetes-e2e-aks-engine-master-windows-containerd-azuredisk-csi-driver": {
    "failing_days": 31

Despite that, some of the boards at https://testgrid.k8s.io/sig-node-containerd do have a lot of green. It's not clear how much of that green is relevant...
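
For anyone re-running this later, a slightly more structured version of the same query (a sketch, assuming jq is available and that the file remains a flat map of job name to an object carrying failing_days, as the excerpt above suggests):

  curl -s http://storage.googleapis.com/k8s-metrics/failures-latest.json \
    | jq -r 'to_entries[]
             | select(.key | test("containerd"))
             | "\(.value.failing_days)\t\(.key)"' \
    | sort -rn

This prints the containerd-related jobs sorted by how long they have been failing, the same signal as the grep pipeline above.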

Member:

SGTM can you open an issue?
FYI @vpickard @karan @bsdnet

Member:

aaand you already did #18570 thanks

config/jobs/kubernetes/sig-node/containerd.yaml (outdated; resolved)
config/jobs/kubernetes/sig-node/containerd.yaml (outdated; resolved)
@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jul 31, 2020
@tpepper tpepper force-pushed the old_release_branch_test_cleaning branch from 1e6f456 to 456e94f on July 31, 2020 19:21
Tim Pepper added 9 commits July 31, 2020 12:22
These are a set of old tests which have been failing for a long
time and are against an old enough codebase to be irrelevant for
the upstream community.

Signed-off-by: Tim Pepper <tpepper@vmware.com>
These are a set of old tests which have been failing for a long
time and are against an old enough codebase to be irrelevant for
the upstream community.

Signed-off-by: Tim Pepper <tpepper@vmware.com>
These are a set of old tests which have been failing for a long
time and are against an old enough codebase to be irrelevant for
the upstream community.

Signed-off-by: Tim Pepper <tpepper@vmware.com>
These are against an old enough codebase to be irrelevant for
the upstream community.

Signed-off-by: Tim Pepper <tpepper@vmware.com>
These are a set of old tests which in many cases have been failing
for a long time and are against an old enough codebase to be
irrelevant for the upstream community.

Signed-off-by: Tim Pepper <tpepper@vmware.com>
There is no longer CI on release-1.15, so it doesn't need to be skipped.

Signed-off-by: Tim Pepper <tpepper@vmware.com>
These are against an old enough codebase to be irrelevant for
the upstream community.

Signed-off-by: Tim Pepper <tpepper@vmware.com>
These are against an old enough codebase to be irrelevant for
the upstream community.

Signed-off-by: Tim Pepper <tpepper@vmware.com>
There is no longer activity on these branches.

Signed-off-by: Tim Pepper <tpepper@vmware.com>
@tpepper tpepper force-pushed the old_release_branch_test_cleaning branch from 456e94f to 8b8ea59 on July 31, 2020 19:23
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jul 31, 2020
@spiffxp (Member) left a comment

/approve
/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jul 31, 2020
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dims, spiffxp, tpepper

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot merged commit 1b716f0 into kubernetes:master Jul 31, 2020
@k8s-ci-robot k8s-ci-robot added this to the v1.19 milestone Jul 31, 2020
@k8s-ci-robot (Contributor)

@tpepper: Updated the job-config configmap in namespace default at cluster default using the following files:

  • key containerd-cri-presubmit-jobs.yaml using file config/jobs/containerd/cri/containerd-cri-presubmit-jobs.yaml
  • key cloud-provider-vsphere-config.yaml using file config/jobs/kubernetes/cloud-provider-vsphere/cloud-provider-vsphere-config.yaml
  • key kops-presubmits.yaml using file config/jobs/kubernetes/kops/kops-presubmits.yaml
  • key eks-periodics.yaml using file ``
  • key release-1.15.yaml using file ``
  • key containerd.yaml using file config/jobs/kubernetes/sig-node/containerd.yaml
  • key conformance-e2e.yaml using file config/jobs/kubernetes/sig-testing/conformance-e2e.yaml
  • key dependencies.yaml using file config/jobs/kubernetes/sig-testing/dependencies.yaml

In response to this:

Following today's SIG Testing meeting on cleaning up test noise, I was looking at some items in http://storage.googleapis.com/k8s-metrics/failures-latest.json and adjacent test configs. This PR is an attempt to start cleaning out things that were relevant, at most, back when we were dealing with releases that are no longer community supported; leaving them in place obscures current work.

Signed-off-by: Tim Pepper <tpepper@vmware.com>

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@dguendisch (Contributor)

Sorry for not noticing earlier, but with Gardener we actually still rely on some of the tests that were removed here.

I understand the cleanup intent, but I feel this is a bit overdone, as the actively reported tests do have a certain value for our customers.
Is it possible to restore the gardener section and clean up only the no-longer-reported tests (k8s <=1.11)?
(I can of course provide a PR, but I'm not sure I have the full picture.)
