
Skip tests when missing language interpreters, rather than failing them. #4236

Closed

data-sync-user opened this issue Jun 23, 2021 · 4 comments

@data-sync-user (Collaborator) commented Jun 23, 2021

As suggested in mozilla/uniffi-rs#464 (review), it's a little hostile to expect that everyone working on UniFFI will have all of the supporting software for all supported languages installed on their development machine in order to run `cargo test`. That's what we have CI for!

Instead of throwing an error when e.g. the Ruby interpreter or the Swift compiler is not available, can we do something similar to "skip test" so that it doesn't fail local builds? (And then in CI, require that no tests be skipped?).



➤ Ryan Kelly commented:

It doesn't seem to be possible to dynamically decide to skip a test in Rust (e.g. you can't `raise SkipTest` like you can in Python), but Rust does have support for marking tests as ignored ( https://doc.rust-lang.org/book/ch11-02-running-tests.html ):

```rust
#[test]
#[ignore]
fn this_test_wont_run_by_default() {
    assert!(false, "should not be run by default");
}
```

Luckily for us, all the foreign-language tests are auto-generated via the build_foreign_language_testcases macro ( https://github.com/mozilla/uniffi-rs/blob/d58e92f58a48e65b0c6ec10c0cb22d103a266e50/uniffi_macros/src/lib.rs#L30 ). What we could do is have this macro inspect the system to see whether it's possible to run each of the target tests, and include the `#[ignore]` annotation automatically if it is not. It's kind of poor form for macros to inspect the system like that, but I think it would be fine in this case.
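
A minimal sketch of how that might look inside the macro, assuming a toolchain probe has already run (the name `generate_test_fn` and the overall shape are hypothetical, not the actual uniffi-rs code):

```rust
// Hypothetical sketch of conditionally emitting #[ignore] from a proc macro;
// names and structure are illustrative, not the real uniffi-rs macro code.
use proc_macro2::TokenStream;
use quote::quote;

fn generate_test_fn(test_name: &syn::Ident, can_run: bool) -> TokenStream {
    // Emit #[ignore] only when the toolchain probe failed, so the test still
    // exists but a plain `cargo test` skips it.
    let maybe_ignore = if can_run {
        TokenStream::new()
    } else {
        quote! { #[ignore] }
    };
    quote! {
        #[test]
        #maybe_ignore
        fn #test_name() {
            // ... invoke the foreign-language test script here ...
        }
    }
}
```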

One nice thing about this approach is that in CI, we could run the tests with `cargo test -- --ignored` to force it to run any ignored tests, helping us make sure that the tests actually work in CI even if they don't run on individual dev machines.


➤ Ryan Kelly commented:

> What we could do is have this macro inspect the system to see whether it's possible to run each of the target tests, and include the `#[ignore]` annotation automatically if it is not.

Concretely, we'd have to expose a new function for each language backend, like `can_run_tests` or similar, that inspects the system and returns a boolean indicating whether we can run the tests.
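
A hedged sketch of what such a probe could look like for, say, the Ruby backend (the name `can_run_tests` and the exact probe command are assumptions):

```rust
// Hypothetical per-backend probe; actually running the interpreter, rather
// than only checking it exists on $PATH, also catches broken installations.
use std::process::Command;

pub fn can_run_tests() -> bool {
    Command::new("ruby")
        .args(["-e", "exit 0"])
        .output()
        .map(|output| output.status.success())
        .unwrap_or(false)
}
```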

Then, we'd have to move some of the logic for figuring out which language backend corresponds to which test file out of the run-time code in `uniffi::testing` and into the build-time code in the build_foreign_language_testcases macro. This logic would be expanded to call the appropriate `can_run_tests` function and emit or omit the `#[ignore]` attribute as appropriate. Luckily it is already a procedural macro rather than a `macro_rules!` macro, because I don't think you could do that sort of conditional logic from the latter.
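
The build-time mapping could be as simple as dispatching on the test file's extension; this sketch is illustrative, with a hypothetical `Backend` enum and an extension list that is assumed rather than taken from the real test suite:

```rust
// Hypothetical mapping from a test file's extension to its language backend,
// evaluated at macro expansion time.
#[derive(Clone, Copy, Debug)]
enum Backend {
    Kotlin,
    Swift,
    Python,
    Ruby,
}

fn backend_for_test_file(path: &str) -> Option<Backend> {
    match path.rsplit('.').next()? {
        "kts" => Some(Backend::Kotlin),
        "swift" => Some(Backend::Swift),
        "py" => Some(Backend::Python),
        "rb" => Some(Backend::Ruby),
        _ => None,
    }
}
```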

We'd also want a simple in-memory cache of the results of calling `can_run_tests`, similar to how `uniffi::testing` remembers which crates it has already compiled.
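
One way to sketch that cache with just the standard library (the cache shape and function names are assumptions):

```rust
// Hypothetical in-process cache so each backend's probe runs at most once,
// mirroring how uniffi::testing caches compiled crates.
use std::collections::HashMap;
use std::sync::{Mutex, OnceLock};

fn probe_cache() -> &'static Mutex<HashMap<&'static str, bool>> {
    static CACHE: OnceLock<Mutex<HashMap<&'static str, bool>>> = OnceLock::new();
    CACHE.get_or_init(|| Mutex::new(HashMap::new()))
}

fn can_run_tests_cached(backend: &'static str, probe: fn() -> bool) -> bool {
    let mut cache = probe_cache().lock().unwrap();
    *cache.entry(backend).or_insert_with(probe)
}
```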


➤ Mark Hammond commented:

> Concretely, we'd have to expose a new function for each language backend like `can_run_tests`

Even in that scenario, a simple check that `kotlinc` works will succeed, but tests will fail unless I set `CLASSPATH` to some obscure value.

Another, simpler option (albeit with worse developer ergonomics) is to have some kind of opt-out mechanism. If the error messages noted how to opt out, it might be a reasonable option.
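
A hedged sketch of such an opt-out, where the `UNIFFI_TESTS_SKIP` variable, the helper name, and the message wording are all assumptions:

```rust
// Hypothetical opt-out via an environment variable; the variable name and
// message wording are illustrative only.
use std::env;

fn ensure_backend_usable(backend: &str, toolchain_ok: bool) -> Result<(), String> {
    let opted_out = env::var("UNIFFI_TESTS_SKIP")
        .map(|v| v.split(',').any(|b| b.trim() == backend))
        .unwrap_or(false);
    if opted_out {
        return Err(format!("{backend} tests skipped via UNIFFI_TESTS_SKIP"));
    }
    if !toolchain_ok {
        // The failure message tells the developer exactly how to opt out.
        return Err(format!(
            "{backend} toolchain is not working; set UNIFFI_TESTS_SKIP={backend} to skip its tests"
        ));
    }
    Ok(())
}
```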


➤ Ryan Kelly commented:

> Even in that scenario, a simple check that `kotlinc` works will succeed, but tests will fail unless I set `CLASSPATH` to some obscure value.

Right; in practice I think it would have to actually try to execute a tiny sample program in order to tell whether running the tests would succeed.
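
For Kotlin specifically, that might mean compiling and running a trivial script end-to-end, which exercises `CLASSPATH` and the JVM rather than just checking that the binary exists (a hedged sketch; the probe approach is an assumption):

```rust
// Hypothetical Kotlin probe: run a trivial script through kotlinc so that a
// misconfigured CLASSPATH also causes the tests to be skipped.
use std::process::Command;
use std::{env, fs};

fn can_run_kotlin() -> bool {
    let script = env::temp_dir().join("uniffi_probe.kts");
    if fs::write(&script, "println(\"ok\")").is_err() {
        return false;
    }
    Command::new("kotlinc")
        .arg("-script")
        .arg(&script)
        .output()
        .map(|output| output.status.success())
        .unwrap_or(false)
}
```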

> Another, simpler option (albeit with worse developer ergonomics) is to have some kind of opt-out mechanism. If the error messages noted how to opt out, it might be a reasonable option.

I also wonder if splitting the language backends into their own crates (ref mozilla/uniffi-rs#299 (comment)) would help with this. Then if you don't want to run e.g. the Kotlin tests, you don't `cargo test` the uniffi-bindgen-kotlin crate. Not sure how that would work with our examples though.
