Update tolerated flavors integration test #179
Conversation
Add to description that this is now possible after #135, with it being the default.
@@ -189,7 +190,7 @@ var _ = ginkgo.Describe("Scheduler", func() {

ginkgo.It("Should schedule jobs on tolerated flavors", func() {
specify the queueing strategy
@@ -103,6 +103,7 @@ var _ = ginkgo.Describe("Scheduler", func() {
prodBEClusterQ = testing.MakeClusterQueue("prod-be-cq").
MakeClusterQueue uses StrictFIFO by default. We should make it match the API default.
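For illustration, a hedged sketch of what stating the strategy explicitly could look like. `QueueingStrategy` is assumed to be (or become) a setter on the test wrapper, and `kueue.BestEffortFIFO` is used on the assumption that it is the API default referred to here:

```go
// Sketch only: QueueingStrategy is an assumed wrapper setter, and
// kueue.BestEffortFIFO is assumed to be the API default being discussed.
prodBEClusterQ = testing.MakeClusterQueue("prod-be-cq").
	QueueingStrategy(kueue.BestEffortFIFO).
	Obj()
```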
+1, and we should have a StrictFIFO test verifying that a workload continues to be blocked while an inadmissible workload is ahead of it in the queue. A rough sketch of that kind of test is below.
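This is only an illustration of the idea, not the eventual implementation; the queue name `strictFIFOQueue` and the resource amounts are placeholders, and `ExpectWorkloadsToBePending` is the framework helper quoted further down:

```go
ginkgo.It("Should keep later workloads pending while the head of a StrictFIFO queue is inadmissible", func() {
	// The first workload asks for more CPU than the ClusterQueue offers,
	// so it stays inadmissible at the head of the queue.
	// strictFIFOQueue and the resource amounts are illustrative placeholders.
	wlBlocked := testing.MakeWorkload("big-wl", ns.Name).Queue(strictFIFOQueue.Name).Request(corev1.ResourceCPU, "100").Obj()
	gomega.Expect(k8sClient.Create(ctx, wlBlocked)).Should(gomega.Succeed())

	// The second, smaller workload would fit, but StrictFIFO must not let
	// it jump ahead of the blocked head.
	wlSmall := testing.MakeWorkload("small-wl", ns.Name).Queue(strictFIFOQueue.Name).Request(corev1.ResourceCPU, "1").Obj()
	gomega.Expect(k8sClient.Create(ctx, wlSmall)).Should(gomega.Succeed())

	framework.ExpectWorkloadsToBePending(ctx, k8sClient, wlBlocked, wlSmall)
})
```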
Also, should we have a similar test ensuring that a smaller, later workload is not scheduled when using StrictFIFO? Although it might be flaky before #164.
why will it be flaky?
I was thinking that we could end up scheduling the second workload while we are asynchronously doing the API update.
However, since we assume the first (blocking) workload, I guess that shouldn't be possible.
> MakeClusterQueue uses StrictFIFO by default. We should make it match the API default.

Added.
ginkgo.By("checking a second job without toleration doesn't start") | ||
job2 := testing.MakeJob("on-demand-job2", ns.Name).Queue(prodBEQueue.Name).Request(corev1.ResourceCPU, "5").Obj() | ||
gomega.Expect(k8sClient.Create(ctx, job2)).Should(gomega.Succeed()) | ||
createdJob2 := &batchv1.Job{} |
you can put this inside the Consistently clause since it is not used later.
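A sketch of what that could look like, with the job declared inside the polling function. The suspended-job check and the `framework.ConsistentDuration` constant are assumptions based on the surrounding test, not the exact code:

```go
gomega.Consistently(func() bool {
	// Declared here because it is not needed outside this check.
	createdJob2 := &batchv1.Job{}
	gomega.Expect(k8sClient.Get(ctx, client.ObjectKeyFromObject(job2), createdJob2)).Should(gomega.Succeed())
	// Assumption: the test asserts the second job stays suspended.
	return *createdJob2.Spec.Suspend
}, framework.ConsistentDuration, framework.Interval).Should(gomega.Equal(true))
```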
You could use QueuedWorkload (instead of Job) and use this (kueue/test/integration/framework/framework.go, lines 151 to 168 in 425187b):
func ExpectWorkloadsToBePending(ctx context.Context, k8sClient client.Client, wls ...*kueue.QueuedWorkload) {
	gomega.EventuallyWithOffset(1, func() int {
		pending := 0
		var updatedWorkload kueue.QueuedWorkload
		for _, wl := range wls {
			gomega.ExpectWithOffset(1, k8sClient.Get(ctx, client.ObjectKeyFromObject(wl), &updatedWorkload)).To(gomega.Succeed())
			idx := workload.FindConditionIndex(&updatedWorkload.Status, kueue.QueuedWorkloadAdmitted)
			if idx == -1 {
				continue
			}
			cond := updatedWorkload.Status.Conditions[idx]
			if cond.Status == corev1.ConditionFalse && cond.Reason == "Pending" && wl.Spec.Admission == nil {
				pending++
			}
		}
		return pending
	}).Should(gomega.Equal(len(wls)), "Not enough workloads are pending")
}
The advantage is that it uses Eventually instead of Consistently, using the QW status.
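For illustration, roughly how the check could look if the test created a workload directly instead of a Job. The name `on-demand-wl2` and the builder options are assumptions mirroring the existing job, not the final code:

```go
wl2 := testing.MakeWorkload("on-demand-wl2", ns.Name).Queue(prodBEQueue.Name).Request(corev1.ResourceCPU, "5").Obj()
gomega.Expect(k8sClient.Create(ctx, wl2)).Should(gomega.Succeed())
// Eventually-based check on the workload status, replacing the Consistently
// check on the suspended Job.
framework.ExpectWorkloadsToBePending(ctx, k8sClient, wl2)
```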
Can we please move forward with this PR? I think we can keep this test as is without changing to workload; we can do the migration for the whole file separately.
Also change to use Workload (instead of Job) in this test case and use ExpectWorkloadsToBePending here.
> we can do the migration for the whole file separately.
+1. We can use Workload in the newly added test case, and migrate the old tests in the whole file in a separate PR.
wlCopy := &kueue.Workload{}
gomega.Eventually(func() bool {
	gomega.Expect(k8sClient.Get(ctx, client.ObjectKeyFromObject(wl1), wlCopy)).To(gomega.Succeed())
	return wlCopy.Spec.Admission != nil
- return wlCopy.Spec.Admission != nil
+ return wlCopy.Spec.Admission
Hi Aldo, did you suggest changing it like this?
expectAdmission := testing.MakeAdmission(prodClusterQ.Name).Flavor(corev1.ResourceCPU, onDemandFlavor.Name).Obj()
gomega.Eventually(func() kueue.Admission {
	gomega.Expect(k8sClient.Get(ctx, client.ObjectKeyFromObject(wl1), wlCopy)).To(gomega.Succeed())
	return wlCopy.Spec.Admission // or *wlCopy.Spec.Admission
}, framework.Timeout, framework.Interval).Should(gomega.Equal(expectAdmission))
wlCopy.Spec.Admission is a pointer to kueue.Admission. We still need to check whether it's nil before getting its value. And we'd better not continuously check the value of Admission; we just need to check it the first time it's not nil.
Oops. My recommendation is to return *kueue.Admission. gomega.Equal and pkg/util/testing.Equal should be able to handle it.
}, framework.Timeout, framework.Interval).Should(gomega.Equal(pointer.Bool(false)))
gomega.Expect(createdJob3.Spec.Template.Spec.NodeSelector[instanceKey]).Should(gomega.Equal(spotTaintedFlavor.Name))
wlCopy := &kueue.Workload{}
gomega.Eventually(func() bool {
- gomega.Eventually(func() bool {
+ gomega.Eventually(func() kueue.Admission {
wl3 := testing.MakeWorkload("on-demand-wl3", ns.Name).Queue(prodQueue.Name).Toleration(spotToleration).Request(corev1.ResourceCPU, "5").Obj()
gomega.Expect(k8sClient.Create(ctx, wl3)).Should(gomega.Succeed())

gomega.Eventually(func() bool {
Same here.
gomega.Eventually(func() *kueue.Admission {
	gomega.Expect(k8sClient.Get(ctx, client.ObjectKeyFromObject(wl1), wlCopy)).To(gomega.Succeed())
	return wlCopy.Spec.Admission
}, framework.Timeout, framework.Interval).Should(gomega.Equal(expectAdmission))
- }, framework.Timeout, framework.Interval).Should(gomega.Equal(expectAdmission))
+ }, framework.Timeout, framework.Interval).Should(testing.Equal(expectAdmission))

or testingutil, depending on the alias. That reports using cmp.Diff.
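For context, a minimal sketch of how a gomega-compatible matcher can report mismatches with cmp.Diff; this is an illustration, not the actual pkg/util/testing implementation:

```go
// Sketch of a cmp.Diff-backed matcher; not the real pkg/util/testing code.
package testing

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

// Equal returns a gomega-style matcher that compares objects with cmp.Diff
// and reports the diff on failure.
func Equal(expected interface{}) *diffMatcher {
	return &diffMatcher{expected: expected}
}

type diffMatcher struct {
	expected interface{}
	diff     string
}

// Match satisfies gomega's matcher interface.
func (m *diffMatcher) Match(actual interface{}) (bool, error) {
	m.diff = cmp.Diff(m.expected, actual)
	return m.diff == "", nil
}

func (m *diffMatcher) FailureMessage(actual interface{}) string {
	return fmt.Sprintf("objects are not equal, diff (-want +got):\n%s", m.diff)
}

func (m *diffMatcher) NegatedFailureMessage(actual interface{}) string {
	return fmt.Sprintf("expected objects to differ, but they are equal: %#v", actual)
}
```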
Thanks.
Updated.
Signed-off-by: Alex Wang <wangqingcan1990@gmail.com>
Great!
/approve
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: alculquicondor, denkensk. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
Signed-off-by: Alex Wang wangqingcan1990@gmail.com
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
Which issue(s) this PR fixes:
Special notes for your reviewer: