HIL updates #1412
Conversation
Also, I just noticed that when running all the tests, if a test failed, we keep running all the other tests. (I guess this should also happen for the examples, as it reuses the same code.) I changed it to fail and exit if a test fails (04800cf). Happy to revert it in case we don't want this.
Clippy lints should be resolved by #1410
Thanks! That's very useful! Running a single test was not much fun before :) I guess we can keep the aliases - they won't hurt, but no strong opinion on that
I think ideally it would be good to try all tests and exit with a non-zero exit code if one failed. Otherwise, if multiple tests fail it might take several attempts to get CI green (but on the other hand, users can just run them locally until it's fine - I guess that's preferable). Locally I often run into the problem that if one test failed, the following tests fail for no reason - so maybe there is no use in trying the remaining tests, then? Probably good to hear others' voices here.
Force-pushed from aaab769 to c7c4e22
I think these can probably be removed. The
I think, assuming a failing test does not cause the failure of subsequent tests, it would be best to continue running all tests. In the case of a PR where multiple peripherals are affected, it could potentially take a number of HIL test runs in order to get things passing otherwise. While it would be nice to assume that everybody tests everything locally, I do not think this is a safe assumption to make.
Force-pushed from 8137596 to d84f62c
I reverted my changes, and now, upon a test failure, the other tests are also run, as it was initially. I've also removed the aliases, as they won't be of any use once we grow our test suite: they'll fail if they can't run all the tests, regardless of whether a test is supported for that target, unless we add probe-rs/probe-rs#2344 and maintain the skipped list.
Just noticed that, at the moment, the workflow will get green even with some tests failing, so I will add some logic to run all the tests while tracking the failed ones and throw an error printing that list at the end. Example of a HIL run with failures that is green: https://github.com/esp-rs/esp-hal/actions/runs/8615443424/job/23610994749
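(For illustration, a minimal sketch of that "track failures, fail at the end" approach; the `run_test` helper and the test names are hypothetical placeholders, not the actual xtask code:)

```rust
use std::process::ExitCode;

// Hypothetical stand-in for whatever flashes the target and runs one HIL test.
fn run_test(name: &str) -> bool {
    println!("running {name}...");
    true // placeholder result
}

fn main() -> ExitCode {
    let tests = ["gpio", "spi", "uart"]; // placeholder test names
    let mut failed = Vec::new();

    // Keep running every test even after a failure, so a single CI run
    // reports all broken tests instead of stopping at the first one.
    for test in tests {
        if !run_test(test) {
            failed.push(test);
        }
    }

    if failed.is_empty() {
        ExitCode::SUCCESS
    } else {
        // Print the full list of failures and make the workflow go red.
        eprintln!("The following tests failed: {failed:?}");
        ExitCode::FAILURE
    }
}
```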
Force-pushed from 416eb72 to c926fba
Thanks for updating the `run-tests` subcommand, handy to be able to build just a single test 😁
Force-pushed from c926fba to 42baad0
LGTM, thanks!
Thank you for your contribution!
We appreciate the time and effort you've put into this pull request.
To help us review it efficiently, please ensure you've gone through the following checklist:
Submission Checklist 📝
- `CHANGELOG.md` updated in the proper section.

Extra:
Pull Request Details 📖
Description
This PR addresses a few different things:

- `-t/--test` to only run a test with the `xtask run-tests` command
- `xtask` subcommands in the readme
- `hil-tests` deps
- `embedded-tests` to allow custom executors, see Configurable executor probe-rs/embedded-test#30. Thanks @bjoernQ!
- `xtask` package Clippy lints

I wonder if it's still worth to keep the `hil-tests` aliases that run all the tests for a target.

Testing
Execution of a single test:
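(For reference, a hedged example of what a single-test invocation might look like with the new flag; the target `esp32c6` and the test name `gpio` are placeholders, and the exact argument order may differ from the real `xtask run-tests` CLI:)

```shell
cargo xtask run-tests esp32c6 --test gpio
```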
Execution of all tests:
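(Under the same assumptions, omitting the `--test` filter runs every test for the target:)

```shell
cargo xtask run-tests esp32c6
```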
`probe-rs` is still throwing some weird RTT errors, but those can still be ignored. The `probe-rs` used for the tests is the one in the VMs (ddd59fa); I've also tried the latest `probe-rs`, but got the same output.