Allow parameterized tests #2176
You could do this with a helper rule or comprehension today. For example:

methods := {"POST", "DELETE", "PUT", "PATCH"}

test_method_not_allowed {
    disallowed_methods == methods
}

disallowed_methods[m] {
    some m
    methods[m]
    not allow with input.method as m
}

It would be nice to know which methods failed. If the test framework was a bit smarter it could report diffs on the variables inside the test rules. For example:

It's important to use a helper rule or comprehension though--this would not be correct:

# THIS IS NOT CORRECT. The test will pass if SOME `m` satisfies the query (not EVERY `m`)
test_method_not_allowed {
    some m
    methods[m]
    not allow with input.method as m
}

Once we have the
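For anyone wanting to run the above end to end, here is a minimal, self-contained version of the helper-rule approach in the older (pre-OPA-1.0) syntax used throughout this thread. The `allow` rule and package name are made-up stand-ins for whatever policy is actually under test:

```rego
package authz

# Hypothetical stand-in policy: only GET is allowed.
default allow = false

allow {
    input.method == "GET"
}

methods := {"POST", "DELETE", "PUT", "PATCH"}

# Passes only if EVERY method in `methods` is rejected by `allow`.
test_method_not_allowed {
    disallowed_methods == methods
}

# Partial set collecting each method for which `allow` does not hold.
disallowed_methods[m] {
    some m
    methods[m]
    not allow with input.method as m
}
```

Running `opa test` against this file still reports a single pass or fail for `test_method_not_allowed`, which is exactly the limitation the rest of the thread discusses.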
Better visualization of diffs in assertions would indeed be great, and perhaps a feature request of its own :) A notable difference between that and parameterization though - in your example you feed data into a single test and iterate over that within the test, whereas in "data driven" parameterized testing the data builds the tests, so given the example (and some way of annotating or marking a test as parameterized - in my made up example it was [input.method] after the
Running
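To make the "data builds the tests" distinction a bit more concrete, here is a purely hypothetical sketch of the kind of marker being described. This is not valid OPA syntax and not necessarily what the commenter had in mind:

```rego
# HYPOTHETICAL -- not supported by OPA. The `[input.method]` marker after the
# rule name would tell the test runner to generate and report one test result
# per parameter value it feeds in, instead of a single pass/fail for a loop
# inside one test body.
test_method_not_allowed[input.method] {
    not allow
}
```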
Another case, albeit quite a different one, I've been looking at recently is piping kubernetes resource lists into conftest and the like (
This works in that it will report all failures, but since it's from a single rule it will have no notion of tests that passed:
One alternative is of course to call conftest/OPA individually per object, but then that's going to miss out on the total... plus of course the overhead of starting those applications hundreds of times. If we could parameterize part of the input to be part of the reported rule name, each test could be counted. I should note though that this use case is still more in the "thinking out loud" department than the previous one.
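A rough illustration of the single-rule pattern being described, assuming the piped-in input is a Kubernetes `List` object and using a made-up policy that requires a `team` label (the package and rule names follow the usual conftest conventions):

```rego
package main

# Every violating item yields a message, so all failures are reported in one
# run -- but nothing records which items passed, and there is no per-item
# pass/fail count.
deny[msg] {
    some i
    item := input.items[i]
    not item.metadata.labels.team
    msg := sprintf("%s %s has no team label", [item.kind, item.metadata.name])
}
```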
This issue has been automatically marked as inactive because it has not had any activity in the last 30 days.
Circling back to this as I recently had a need for something like this again. BTW, the conftest issue mentioned above has been solved in conftest, and what I wanted there is now possible.

I think that a simple extension to the test framework allowing partial rules to be used would be the most elegant, Rego native, way of dealing with this:

test_method_not_allowed[method] {
    some method in ["POST", "DELETE", "PUT", "PATCH"]
    not allow with input.method as method
}

All that the test runner would need to keep track of is the

As this is already how OPA deals with partial rules, it seems like it would be quite a simple addition 🤔
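The partial rule in that snippet already evaluates to the set of failing parameters when queried on its own; what is being proposed is only that the test runner report one result per element. A self-contained sketch of the existing behavior, with a made-up `allow` stand-in and the rule deliberately not named `test_*`, since the runner support discussed here does not exist:

```rego
package authz_test

import future.keywords.in

# Hypothetical stand-in policy: only GET is allowed.
allow {
    input.method == "GET"
}

# Queried directly (data.authz_test.method_not_allowed), this evaluates to the
# set of methods for which `allow` did not hold -- here all four of them.
method_not_allowed[method] {
    some method in ["POST", "DELETE", "PUT", "PATCH"]
    not allow with input.method as method
}
```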
This issue has been automatically marked as inactive because it has not had any activity in the last 30 days.
Just if anybody else is running across this issue: This is now possible using the every keyword:
While
I think

test_method_not_supported {                        # test_method_not_supported is true if...
    every method in {"PUT", "PATCH", "DELETE"} {   # for every method in ...
        not allow with input.method as method      # not allow with ... is true
    }
}

If the condition attached to the

As far as providing better assertion failure reporting goes... I wonder how far we could get by plugging all vars on the failed expression and then eliding elements in large collections... this approach would be nice regardless of whether
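For anyone who wants to run that test as written, a self-contained version in pre-1.0 syntax; the `allow` rule is again a made-up stand-in, and on older OPA versions the `every` keyword needs the future-keywords import:

```rego
package authz_test

import future.keywords.every

# Hypothetical stand-in policy: only GET is allowed.
allow {
    input.method == "GET"
}

# Passes because `allow` does not hold for any of the three methods.
test_method_not_supported {
    every method in {"PUT", "PATCH", "DELETE"} {
        not allow with input.method as method
    }
}
```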
Yeah - we shouldn't exit early in that case though as we'd want to know if there were more than one assertion that failed. But maybe we don't do that in tests. The assertion failure reporting would be an improvement for sure!
@anderseknert you are right, that part is still open. I just think that #2176 (comment) is already a huge improvement over #2176 (comment).
Right - it's a cosmetic improvement for sure, but I'd argue that the part that is "still open" is the actual parameterization. In some ways,

EDIT - I just had another look at the original suggestion, and that too would be subject to early exit, so no difference in that vs. using every.

Using partial rules (as suggested before) for this would avoid early exit, and feels like an "OPA native" way of doing it. The problem with that approach is that we currently lack a way of differentiating success vs failures in a single "partial test", as we'd need a way to keep track of what's being iterated over, and count both failure conditions and successes.

With better assertion reporting, and disabling of early exit in tests, we'd get somewhere close to that using
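One way to get part of the way there today, purely as a sketch (the `allow` stand-in, package, and rule names are made up, and this is ordinary assertion logic rather than the runner feature being discussed): record the outcome per parameter in a partial object, then assert on the whole map, so both successes and failures are visible in a single value.

```rego
package authz_test

import future.keywords.in

# Hypothetical stand-in policy: only GET is allowed.
default allow = false

allow {
    input.method == "GET"
}

# One entry per parameter: true if the method was (incorrectly) allowed.
method_allowed[method] = outcome {
    some method in ["POST", "DELETE", "PUT", "PATCH"]
    outcome := allow with input.method as method
}

# Fails unless every method maps to false; with better assertion reporting
# (as discussed above) the full map would be visible on failure.
test_methods_not_allowed {
    method_allowed == {"POST": false, "DELETE": false, "PUT": false, "PATCH": false}
}
```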
This issue has been automatically marked as inactive because it has not had any activity in the last 30 days.
I second what @anderseknert says about parameterized tests: the early exit behavior and the output of the parameter used for each run are indeed important to get this feature done. IMHO the
I described in https://github.com/orgs/open-policy-agent/discussions/514 our current issue: trying to find a way to test all the rules' possibilities to ensure business coverage of them. Having this parameterized feature was exactly what I was looking for.
@tsandall / @c-thiel there is a major problem using the
That seems like a bug @marcaurele 😅 could you provide a snippet showing what that code looks like?
@anderseknert I opened a new bug in order to not mix it up with this feature request: #6400
This issue has been automatically marked as inactive because it has not had any activity in the last 30 days. Although currently inactive, the issue could still be considered and actively worked on in the future. More details about the use-case this issue attempts to address, the value provided by completing it or possible solutions to resolve it would help to prioritize the issue.
It would be very useful if the unit test system allowed for some type of simple "data driven" parameterization of tests, where one of the mocked inputs could be parameterized to avoid needless verbosity and possible copy-paste errors.
Expected Behavior
(obviously made up example and syntax)
Actual Behavior