Adding Kubelet E2E Lock Contention job #20106
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: knabben. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.

/area test
@SergeyKanzhelev: The label(s) In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

/test pull-test-infra-integration

/reopen
@knabben: Reopened this PR. In response to this:
/assign @ike-ma
@SergeyKanzhelev: GitHub didn't allow me to assign the following users: ike-ma. Note that only kubernetes members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. In response to this:
Ack. Review in progress.
@@ -204,6 +204,37 @@ periodics:
      testgrid-tab-name: node-kubelet-orphans
      description: "Contains all uncategorized tests, these are tests which are not marked with a [Node] tag. All node tests should be marked with [NodeFeature] or [NodeSpecialFeature] or [NodeAlphaFeature] or [NodeConformance] classification. Also skipped are [Flaky], [Benchmark], [Legacy]."

- name: ci-kubernetes-node-kubelet-lock-contention
  interval: 4h
Just to understand the context: is there any particular reason why we are using a 4-hour interval? Ditto for the two timeout values at ln#217 and ln#229?
They are duplicated, nice catch. The 4-hour interval was the standard for these jobs; there's no particular reason specific to this job. Any suggestion for a better value?
- --node-test-args=--kubelet-flags="--exit-on-lock-contention --lock-file=/var/run/kubelet.lock"
- --node-tests=true
- --provider=gce
- --test_args=--nodes=1 --focus="\[NodeFeature:LockContention\]" --restart-kubelet=false
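For context, the fragments above fit together into a single Prow periodic job entry. The following is a hedged sketch only, assembled from the diff hunks in this PR plus common test-infra conventions; the `annotations` values, `decorate` field, and container image are assumptions, not the exact contents of this change:

```yaml
# Sketch of the new periodic job (illustrative; image and annotation
# values are assumptions, the name/interval/args come from the diff).
- name: ci-kubernetes-node-kubelet-lock-contention
  interval: 4h
  decorate: true                              # assumed, standard for new jobs
  annotations:
    testgrid-tab-name: kubelet-gce-e2e-lock   # tab name mentioned below
  spec:
    containers:
    - image: gcr.io/k8s-testimages/kubekins-e2e:latest-master  # assumed tag
      args:
      - --node-test-args=--kubelet-flags="--exit-on-lock-contention --lock-file=/var/run/kubelet.lock"
      - --node-tests=true
      - --provider=gce
      - --test_args=--nodes=1 --focus="\[NodeFeature:LockContention\]" --restart-kubelet=false
```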
Consider updating the `--restart-kubelet` argument passing based on the final decision in kubernetes/kubernetes#97028.
Yes, this PR should be merged after kubernetes/kubernetes#97028.
/lgtm
This added a new tab `kubelet-gce-e2e-lock` running tests focused on `NodeFeature:LockContention` with the proper flags for the suite. Ref #20105