Post-build script execution #545
Comments
This will likely happen as part of an implementation of |
What about integration tests? For example, PNaCl modules can be run on the host machine, but they have to be translated into a native NaCl executable beforehand and then run with sel_ldr (from the NaCl SDK). |
@DiamondLovesYou, would #1411 work for the tests? |
A "post_build" script would be useful for things like creating a macOS bundle, rather than having a separate script to do that work. "install" could do this too, but maybe not as useful for testing there? |
+1. For Rust projects that are dynamically linked from other languages, being able to deploy (move) the .so after a successful build would be nice. |
This would be useful for running … to fix up the headers and such after building my project. |
+1 for post build scripts |
I'd prefer it if cargo would generate mod_prometheus.so instead of libmod_prometheus.so, but I did not find a way to tell cargo or rustc not to prefix the binary with 'lib'. A post-build step in cargo would also be an option if this were implemented: rust-lang/cargo#545
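As an illustration of that kind of post-build step, here is a minimal sketch of a rename/copy helper one could run by hand after `cargo build --release`; the crate name comes from the comment above, but the paths and the idea of running it manually are assumptions, since Cargo has no such hook today:

```rust
use std::fs;
use std::path::Path;

fn main() -> std::io::Result<()> {
    // Assumption: run manually from the crate root after `cargo build --release`.
    let dir = Path::new("target/release");
    let built = dir.join("libmod_prometheus.so"); // what cargo produces for a cdylib
    let wanted = dir.join("mod_prometheus.so");   // what the module loader expects
    if built.exists() {
        // Copy rather than rename so cargo's own bookkeeping is left untouched.
        fs::copy(&built, &wanted)?;
    }
    Ok(())
}
```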
It doesn't have to mean that cargo will be a general purpose build system. Crate B depends on crate A, and is in a sub-directory of crate A, and it automatically gets built every time crate A has been built. |
But what if crate B also depends on sub crates C and D? Shouldn't the build of dependencies be caused by building the crate that depends on them in a "by need" manner? |
This would be useful for embedded systems programming. I often run |
Don't forget that there's nothing that says you have to use a non-Rust solution for these kinds of things; any "cargo-foo" executable on your $PATH works as "cargo foo", and can be written in any language, including Rust. Your custom command could invoke "cargo build" as a step first and then do anything.
|
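A minimal sketch of that wrapper-subcommand approach (the name `cargo-postbuild` is made up): placed on `$PATH`, it can be invoked as `cargo postbuild`, run the real build, and then do arbitrary post-processing:

```rust
use std::process::{exit, Command};

fn main() {
    // When cargo dispatches `cargo postbuild`, argv is:
    //   [path-to-cargo-postbuild, "postbuild", <user args>...]
    // so skip the first two and forward the rest to `cargo build`.
    let args: Vec<String> = std::env::args().skip(2).collect();
    let status = Command::new("cargo")
        .arg("build")
        .args(&args)
        .status()
        .expect("failed to run cargo build");
    if !status.success() {
        exit(status.code().unwrap_or(1));
    }
    // The build succeeded; do any post-processing here (copy, bundle, sign, ...).
    println!("cargo-postbuild: build finished, running post-build steps");
}
```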
Yes, that is what I, and probably others, are already doing. It would just be more convenient to integrate this into the already familiar |
Not to mention that, in my experience, if you try to enforce your will too strongly, you start to get behaviours similar to what has been standard in the home inkjet printer market for well over a decade now. (Where the first thing you see when you open the box is a slip that warns you, in big letters, to never plug the printer in until after you've installed the software off their CD, because Microsoft's automated driver installation workflow doesn't allow them to install their fancy, tray-resident, bloated, talking print drivers and their official position is that, if Microsoft's installer has had a chance to run, your OS is now broken and must be repaired first.) Better to have an official way to run a post-build script than to see everyone and their dog reinventing broken wheels... especially since, as a Stubborn Person™, I can easily foresee these horrors:
(Also, keep in mind that, as Rust sees more corporate uptake, we'll see more "I'm too busy to learn the 'proper' way, so I'll just hack something in" WTFs. That's just a basic rule of programming in general.) |
In embedded development it's useful to run |
This would be useful for stuff like … when developing Windows drivers in test mode. |
This would be useful for packaging Debian applications, especially when combined with configuration |
|
Why not do this: support an optional postbuild.rs script (analogous to build.rs) that runs after every build and can check whether the build succeeded or failed. |
@Boscop I like that idea but I worry that executing another binary on every compilation might slow down iterative development. Maybe using |
I think there won't really be a slowdown compared to having different scripts for success/failure, because if your postbuild.rs script checks the build status first and returns early when it isn't the status it cares about, it won't really slow things down. The postbuild.rs script won't have to be recompiled every time, only when it changes. So calling it will just incur the cost of starting an executable that immediately returns. |
That sounds reasonable. |
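For concreteness, a minimal sketch of what such a postbuild.rs might look like under the idea discussed above; the `BUILD_STATUS` environment variable is purely hypothetical, since Cargo defines no such interface:

```rust
use std::env;

fn main() {
    // Hypothetical: Cargo would set this to "success" or "failure".
    // No such variable exists today; it is part of the proposed interface only.
    let status = env::var("BUILD_STATUS").unwrap_or_default();
    if status != "success" {
        // Returning early keeps the cost of the hook at "start a process and exit".
        return;
    }
    // Post-processing for successful builds: pack, copy, zip, log, ...
    println!("post-build: build succeeded, running packaging steps");
}
```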
@Boscop idea of the |
Not that I'm advocating for it per se, but Cargo already is a general-purpose build system due to the cargo plugins (e.g. …). So the real question becomes: how ergonomic do we want to make this general-purpose build system? |
|
It's not a standard thing with … To put it another way, if … |
I do think it's necessary to provide an official post-build script hook, rather than …
Two relevant cases I have are: …
A possible implementation could be: …
|
Perhaps support for scripts occurring throughout the build process (pre_build and the like) could be accommodated and enhanced with an in-memory database or file system to facilitate communication or resource sharing between scripts... I can't see where such a thing wouldn't be immediately valuable. However, isn't there an existing need to sandbox build scripts somehow, such as by restricting resource access through a monitoring agent? |
For myself, I feel like post-build scripts would be an incremental but incomplete improvement that is a dead end. What we really need is a build orchestrator above cargo for solving these kinds of use cases. For example, if you want a universal binary, you need a post-build script that combines the results of two targets. I wrote more of my thoughts at https://epage.github.io/blog/2023/08/are-we-gui-build-yet/ |
@epage could you please give examples of prior art for such an orchestrator? Do you mean meta build systems like Nix/Bazel/Buck? |
@epage an external build system can run a command after the whole build completed, but it can't run after the build of each crate, so it doesn't completely replace a Cargo integration for postbuild scripts. (one use case is to patch each object file after they are built, but before linking) @nvzqz perhaps he is referring to something like just or cargo-make |
I just found cargo-post which seems like a nice workaround. However, it always runs the |
All the use cases I've seen are for final artifacts only (bins or SOs). What do you want to do to the rlibs? |
@epage right now, indeed, I don't! But here's a use case: what if I wanted to replace all instances of a certain function call in a crate with something else? (be it a call to another function, or even something more custom) One idea to implement this is with a custom MIR -> MIR compiler pass, but I'm not sure this is even available, and if it is, it's certainly nightly-only with no prospects for stabilization (I guess it would require stable MIR, at least). But we could operate at a lower level. There are already lower-level tools like wasm-snip that replace function bodies with no-ops, to save space if the function is only called by dead code that wasn't eliminated by the optimizer. But that operates on the whole artifact. If such a tool could operate on individual crates, it could allow for more fine-grained manipulations, for other purposes. Something like: I want all allocations done in this crate to be replaced by a panic, but I don't want to instrument the code or anything; I could just add a postbuild script to this crate to modify the rlib (or dylib or whatever). |
After a long time I came to the conclusion that |
For the complex cases, perhaps, but it doesn't feel reasonable to require every Cargo integration (e.g. VSCode plugins) to add support for inserting a project-specific wrapper around Cargo calls just so you can do stuff to the final artifact, like non-automatic codesigning, which may need to happen on some targets before something like … That feels like it's being disproportionately unfair to, for example, platforms which aren't self-hosting and require either emulation or integration with a remote device, such as mobile and bare-metal targets. |
Scripts invoked by hooks and provided with pre-compiled libraries offering an API into the build process can perform all manner of inspection and augmentation of the build throughout. In fact, too much, perhaps; consequently, a sandbox, permission system, or a constrained API seems equally essential to manage the scope, at least for scripts of external dependencies. Ultimately, restricting scripts to a predetermined collection of domain-specific libraries could be very efficient, manage the permissions of such scripts, and yet offer an extensive API into the build process. |
A way to enable that was already proposed and accepted... it's just waiting for someone to implement it: ...though it'll probably be for proc macros first since they're much more likely to be pure. See also #5720 |
@ssokolow oh, sure, I'm not against it. I just happen to have that more complex case and it'd be nice being able to get the correct list of dependencies. (Not a big deal since that is only needed for edge cases now.) Or better said, for my case post-build scripts (to generate man pages, completions...) are not very suitable while for generating binaries they are. So ideally we should have support for both. You make good points about tests & stuff. |
A workaround for some use cases is to use a workspace and use the |
So my argument was about officiality. It could be solved with a third-party |
FWIW, as a workaround I have had some success (I'm building a macOS xcframework which needs to be consumed by Swift and C code) with the following - this is only practical if your projects are set up as a Cargo workspace:
While it's not pretty, and it involves a lot of fishing around in build directories, it does work as a way to have code that can only run after a dependency is built. I do not think this sort of thing is outside the scope of Cargo's purpose - it is a very common activity to need to post-process a binary or generate some additional artifacts that should only be generated at the conclusion of a successful build - foreign language bindings are the perfect example of that. |
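The actual steps from the comment above were not captured here, but one possible shape of that kind of helper crate is sketched below; the crate name `my_real_crate`, the paths, and the separate `--target-dir` are all assumptions, not a description of the setup the commenter used:

```rust
// build.rs of a "do-nothing" helper crate in the same workspace (sketch only).
use std::env;
use std::path::Path;
use std::process::Command;

fn main() {
    let cargo = env::var("CARGO").unwrap_or_else(|_| "cargo".to_string());

    // Build the real crate with its own target dir: the outer build holds a lock
    // on the workspace's default target directory, so reusing it could block.
    let status = Command::new(cargo)
        .args(["build", "--release", "-p", "my_real_crate"])
        .args(["--target-dir", "target/postbuild"])
        .status()
        .expect("failed to spawn cargo");
    assert!(status.success(), "building my_real_crate failed");

    // The dependency's artifact now lives in the nested target dir; post-process it
    // (lipo, xcodebuild -create-xcframework, codesign, generating headers, ...).
    let artifact = Path::new("target/postbuild/release/libmy_real_crate.dylib");
    println!("cargo:warning=post-processing {}", artifact.display());

    // Coarse-grained re-run trigger: rebuild whenever the real crate's sources change.
    println!("cargo:rerun-if-changed=../my_real_crate/src");
}
```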
How did you set up the dependencies in this do-nothing project so that the original project has finished compiling and produced a … ? I tried to use build-dependencies (so that it will be built before build.rs starts), but that attempts to link the original project to the build.rs code, which is not what I want. |
In an OS project I am doing, I need the compiled bootloader files to be put into position and then have a virtual machine run using those files. My current solution is a hacked-together shell script that I annoyingly must run in place of |
perhaps you can call the script via |
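One hedged sketch of that kind of workaround without post-build support: an xtask-style helper binary in the workspace, run as `cargo run -p xtask`, that builds, copies the bootloader files into position, and boots the VM. Every name, path, and QEMU flag below is an assumption:

```rust
// xtask/src/main.rs - sketch of a "build, stage, boot" helper (all paths assumed).
use std::fs;
use std::process::Command;

fn main() {
    // 1. Build the OS crates.
    let built = Command::new("cargo")
        .args(["build", "--release"])
        .status()
        .expect("failed to run cargo build")
        .success();
    assert!(built, "build failed");

    // 2. Put the compiled bootloader files into position for the VM.
    fs::create_dir_all("vm/boot").expect("create staging dir");
    fs::copy("target/release/bootloader", "vm/boot/bootloader")
        .expect("copy bootloader into place");

    // 3. Run the virtual machine using those files.
    let ran = Command::new("qemu-system-x86_64")
        .args(["-drive", "format=raw,file=vm/disk.img"])
        .status()
        .expect("failed to launch qemu")
        .success();
    assert!(ran, "qemu exited with an error");
}
```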
Those coming from other languages with different package managers might expect a post-build option and be sad not to find it. I hope something comes up, or at least a standardized way to do it which doesn't involve experimenting with different packages for a basic use case |
Currently, cargo executes a script before the build starts via the `build` field. I propose renaming `build` to `pre_build` and adding `post_build` (which would run after every successful build). It's useful for general postprocessing: running executable packers, zipping files, copying stuff, logging, etc.