Support for test run labels/selectors #483
Conversation
FTR, binaries are up at https://github.com/porridge/kuttl/releases/tag/v0.15.0%2Bporridge.1
Nothing really blocking on this... it would be great to have a conversation around the comments, but it's fine if you feel good about them. Documentation is missing and we should create an issue to capture that. We have a number of ways to skip or isolate tests already... like using `--test`. I didn't think it through deeply... I suppose we can have awkward scenarios, which are the responsibility of the user to understand.
Something to be cautious of (going off intuition here): it seems likely that we want to "wait" for a running object with a label... or have an object started outside of kuttl that has a label. It seems we don't want to confuse users with different label contexts. This one is a selector for a test label (which isn't an object running in the cluster... it is a set of declarations to use during a test). Another type of selector is to select on running objects, and I wonder if we need to delineate that... perhaps premature. However, I've been thinking for a while now about adding kuttl labels or annotations on objects it creates. We currently do not clean up tests if we do not delete the namespace. I was thinking we could do cleanup if we labeled or annotated the kuttl-created objects. Great addition and thanks @porridge!
    metav1.ObjectMeta `json:"metadata,omitempty"`

    // Which test runs should this file be used in. Empty selector matches all test runs.
    TestRunSelector *metav1.LabelSelector `json:"testRunSelector,omitempty"`
I'm curious why we use this structure... this doesn't feel like a selector... the selector is on the command line when running kuttl. This feels more like a label (that a selector would select on), which makes me wonder why we wouldn't use the standard k8s model, which would look more like:
==> tt/01-assert-cat.yaml <==
apiVersion: kuttl.dev/v1beta1
kind: TestFile
metadata:
  labels:
    flavor: cat
thoughts?
This particular arrangement of which side the labels vs. the selectors go on was @eddycharly's idea, so maybe he can chime in on where it came from. Maybe it's following some pattern seen elsewhere?
I guess you can think about this in two ways:
- (This implementation): a "test run" (i.e. a `kuttl` invocation) comes with a set of labels attached. These labels represent some characteristics of this test invocation. In my use case, there would be a single label with just two possible values: `openshift=true` or `openshift=false`, but you could imagine multiple labels and values for a more elaborate test matrix (such as the cartesian product of `platform={openshift,vanilla,eks}, k8s-version={1.28,1.29,1.30}`). Then individual files in the test suite provide an optional label selector, which needs to match the label set in order for the file to be considered for the test run. Obviously the selector can contain a subset of the label keys that the test run has. For example, a given file might only select one or more `k8s-version` values and be neutral w.r.t. `platform`. The notable edge case is a file that (by default) has no selector attached - an empty selector matches all possible label sets, meaning this feature is fully backwards-compatible - a file without a `TestFile` is included in all test invocations, just like before this change.
- (I haven't thought about this case earlier, so I'm inventing things as I go.) Each test file comes with an optional label set (empty by default). Let's say that some file tests some API introduced in version 1.29 but also present in 1.30. Since a label set can only specify a single value for a given label key, we'd either have to allow multiple label sets per file (eek!), or introduce a copy of the file per k8s version (ouch), or instead of labeling by k8s version, label by feature availability: `feature-foo={true,false}`. Then, each test run can have a label selector attached, and this selector would need to select the APIs supported. This could quickly create an explosion of labels and combinations. What's worse, files which are agnostic about a given API would have to somehow advertise that fact, in order to be selected by a selector which does mention a given API.
So it looks to me that only solution 1 is in fact realistic.
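To make the matching semantics of option 1 concrete, here is a minimal Go sketch of how a file's selector could be evaluated against a run's label set using the standard apimachinery helpers. The function name and wiring are illustrative assumptions, not necessarily how this PR implements it:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// fileMatchesRun (hypothetical helper) reports whether a file with the given
// testRunSelector should be included in a run carrying runLabels. A missing
// selector matches everything, which keeps files without a TestFile object
// backwards-compatible.
func fileMatchesRun(sel *metav1.LabelSelector, runLabels map[string]string) (bool, error) {
	if sel == nil {
		return true, nil // no TestFile / no selector: include in every run
	}
	s, err := metav1.LabelSelectorAsSelector(sel) // an empty selector becomes labels.Everything()
	if err != nil {
		return false, err
	}
	return s.Matches(labels.Set(runLabels)), nil
}

func main() {
	sel := &metav1.LabelSelector{MatchLabels: map[string]string{"flavor": "cat"}}

	ok, _ := fileMatchesRun(sel, map[string]string{"flavor": "cat", "k8s-version": "1.29"})
	fmt.Println(ok) // true: the selector only constrains the keys it mentions

	ok, _ = fileMatchesRun(nil, map[string]string{"flavor": "dog"})
	fmt.Println(ok) // true: an absent selector matches any label set
}
```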
package cmd |
Not sure about the name of the file... `values` could be anything... seems like `labels` or something similar is more readable IMO.
I called it values because the type implements the `pflag.Value` interface, and I thought it might be a good name if we add more types that can be used via that interface. Maybe I should have inserted a comment about this...
But sure, this can be `labels.go` too if we go for one file per type. Let me know which one you prefer.
Not sure what you mean here ☝️ ...
Do you mean docs for this change? Or for something that exists already?
Right. I corrected the description, since what I had in mind is reuse of an individual test, rather than a whole suite.
I think we should be fine if we clearly qualify labels or selectors as e.g. "test labels" vs "resource labels" or "object labels" in the docs.
/lgtm
After a hangout conversation around this... this makes a lot of sense and it looks ready to go.
- Introduce a gke-operator e2e job.
- Use a modified kuttl with support for conditional inclusion of files, see Support for test run labels/selectors kudobuilder/kuttl#483
- Split openshift-specific parts/variants of tests into separate files to make the test suite pass on GKE. The above broke validation of some files. Made the script skip them as a quick workaround. Proper solution tracked in a sister subtask.
What this PR does / why we need it:
Make it possible to associate label sets with test runs and to attach label selectors to individual files, as discussed on Slack. The purpose is conditional ignoring of selected files, with the goal of having a single test directory with some variation, to allow for testing on different platforms.
Why a new resource type?
I was not able to find any appropriate existing place:
- The `TestStep` object applies to all files in a given step, and only a single such object is permitted per step. This might be too coarse a granularity, since for example in a given step we might only want to vary some assertions and keep the apply files constant.
- A single `TestAssert` is used in a given step (although, unlike for `TestStep`, additional `TestAssert` objects seem to be silently ignored rather than complained about). Also, `TestAssert` objects are only active if they appear in an NN-assert*.yaml file - they seem to be ignored in all other files, including errors files - and varying the apply files is a goal.
Simple usage example
Setup
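A hypothetical sketch of what the setup could contain, based on the `tt/01-assert-cat.yaml` file name from the review discussion and the `testRunSelector` field of the new `TestFile` type (the actual example files are not reproduced here):

```yaml
# tt/01-assert-cat.yaml -- only considered by runs whose labels satisfy the selector
apiVersion: kuttl.dev/v1beta1
kind: TestFile
testRunSelector:
  matchLabels:
    flavor: cat
---
# ...cat-specific assertion documents would follow in the same file...
```

A sibling `tt/01-assert-dog.yaml` would carry `flavor: dog` in its selector, so each run picks up only the matching variant.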
A `flavor=cat` test run
A `flavor=dog` run
Real-world usage example
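As a sketch of the pattern used in the test directory linked below (the actual file contents may differ), an OpenShift-only variant starts with a `TestFile` document selecting on the `openshift` label mentioned above, its counterpart selects the opposite value, and shared files carry no `TestFile` document at all, so they keep applying to every run:

```yaml
# 01-assert-openshift.yaml -- hypothetical OpenShift-only variant
apiVersion: kuttl.dev/v1beta1
kind: TestFile
testRunSelector:
  matchLabels:
    openshift: "true"
---
# ...OpenShift-specific assertions...
```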
See this test directory (see the files ending in `(non-)openshift.yaml`).
Fixes #379