Separate build from execution for integration tests #1352
This would also be good just for the general practice of making sure you know what your runtime dependencies are vs. your build-time dependencies, without needing a separate set of build+test jobs in CI.
So you could do:
And provided none of those args trigger a rebuild from build scripts etc., it should skip building again (this is at the compiler's discretion, not mine, however). I'll look into a better workflow for this in some way, though.
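As a rough sketch of the two-step invocation being described here (using the flags that appear later in this thread; whether the second step really avoids a rebuild is up to cargo, as noted above):

# Step 1: build the instrumented test binaries without running them.
cargo tarpaulin --no-run
# Step 2 (later, e.g. once the runtime environment is ready): run without
# cleaning the target directory, so the earlier build can be reused.
cargo tarpaulin --skip-clean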
Thanks for that, I missed the significance of those in the docs. I thought they were more for "smoke" tests. I will give this a go. Note that the workflow is more like:
I haven't confirmed this, but the things I am wary of are (a) could the updated timestamps on the source tree thwart the
so you still need cargo to get crate metadata, so it does currently require the toolchain. I could potentially create some sort of recipe file describing the built artefacts. Tarpaulin does rely on having the source code available and the Rust toolchain, so it might take a bit of fiddling to get things to gel nicely. But I have no objections to working on adding this for the next release.
Don't take too much from my vague description there; I am still trying to make both ends of a CI pipeline meet in the middle, so it's hard to describe any concrete issue I have at this point. Fundamentally, though, I have two independent goals that relate to build/test separation:
I've been messing around with
At the moment I think tools like
Ah, so for embedded targets I did explore this a bit in https://github.com/xd009642/llvm-embedded-coverage, but that was more for embedded no-std targets. That said, the binary bloat from llvm coverage ELF sections meant I was struggling to fit a hello-world-like test onto my target (it just about fit, iirc). And the probe-rs libraries for sending data over JTAG for debug logs etc. didn't seem to guarantee data would be sent/received, which complicated getting the profiling stats out. For those types of systems my line of thinking was to use probe-run, which is more like a breakpoint-based coverage, to insert breakpoints and step through the tests: no additional bloat to the binary beyond debug info, and no-std targets are usually single-threaded anyway. But I hadn't explored further because there hasn't been much demand. If you wanted to hack on it
Oh, a bare-metal
After reading a tonne of
I've read through the rust article about coverage and some of the source code. A logical separation would have:
On the surface it can still be combined as
Custom test runners like
For custom Docker images, it'd be beneficial to have
Let me know how this sounds; there's no doubt I missed some details, but I think the general idea sounds promising.
First note, point 2 doesn't require the llvm build tools if done via tarpaulin, because we implemented our own profraw parsing to avoid the faff of installing extra binaries. I would probably just change that one into running the tests. There is a bit of faff with nextest I have to figure out: it can't work with the ptrace backend, and llvm coverage has issues that stop it working for certain project types. For a start, building things and packaging up the test executables along with a serialized JSON or something giving metadata about the tarpaulin settings etc., and then giving tarpaulin a way to take that and run it all, would maybe be simpler to implement and get in.
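Purely as a hypothetical illustration of the "serialized JSON with metadata" idea (none of these file names or fields exist in tarpaulin today), the build step might drop something like this next to the packaged test executables:

# Hypothetical descriptor emitted by the build step, so a later run step
# can pick up configuration that matches what was built.
cat > target/tarpaulin-descriptor.json <<'EOF'
{
  "test_binaries": ["target/debug/deps/project1-0123abcd"],
  "engine": "llvm",
  "target_dir": "target",
  "build_args": ["--no-run", "--bin", "project1"]
}
EOF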
If profraw parsing is integrated, then yeah, step 2 could be inside step 3 and step 2 would run the tests. Later today I'll try to follow the logic through the source code and see where each stage meets.
Is there any particular reason why the args use a builder pattern instead of derives? Separating the args would be a good first step for me to try and figure this out.
It was initially written before clap_derive existed, and also before structopt had its first proper release. So it's largely an act of legacy. When clap 3 came out I tried to move over to it, but I found areas where it seemed I couldn't simply derive a struct with the same behaviour without changing my config struct. And my config struct is used for serializing config files, so there's an element of backwards compatibility that's caused inertia in that respect. I will happily accept a PR which moves it to modern clap provided it doesn't break anything for existing users - but I've largely prioritised other features or upgrades (the syn 2 upgrade is one I've been struggling with, and would be a bigger win for perf/compile time).
I'm confused by what "change args" means here. What I need to do in order to use this is basically: build the Rust daemon, run some Python tests that use that built daemon, report coverage. Ideally I'd be able to combine it with "cargo test". I've tried lots of variants of this, but it appears cargo tarpaulin only works by invoking the target itself?
I think an example problem case would be:
cargo tarpaulin --no-run --bin project1
# [...]
cargo tarpaulin --skip-clean --bin project2
You can mismatch many more options, but simply using the wrong binary would fail. What I believe xd009642 wants is to prevent that, most likely by providing some sort of test 'descriptor' that gets generated during the build and provides subsequent steps with configuration that matches the build step. I'll try to find some time on the weekend to split up the configuration for each step, since we need to know the common elements between each stage.
Oh, OK, so that comment was about what could go wrong, not a recipe I could follow to do what I want. I think you're saying my use case is not possible with tarpaulin: that is, it's not possible to run a built binary separately and subsequently collect coverage from that run with tarpaulin.
Not right now at least; this issue is about allowing that.
Hello, I've got a question about whether it's possible, or whether there exists a dedicated mechanism, to build the instrumented binary and generate reports separately.
The use case would be integration tests where I'd like to separate building (which requires the cache) from running (which requires a special environment). It doesn't really matter to me whether result interpretation is done directly after running or in a separate job, but this may be a good place to answer that too. :)
I've roughly looked over the available flags and tried to find related issues, but nothing stood out.
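For context, the closest approximation with the flags discussed earlier in the thread looks roughly like the two jobs below. This is only a sketch, and it assumes the target/ directory is shared between the jobs and that nothing (e.g. changed timestamps) triggers a rebuild:

# Job 1 (build environment with the cargo cache): compile the instrumented
# test binaries but don't run them.
cargo tarpaulin --no-run
# Job 2 (special runtime environment, reusing the same target/ directory):
# run the tests and report coverage without cleaning the previous build.
cargo tarpaulin --skip-clean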