[Mini-RFC] Tracking issue for single source cross-compilation #51623
Comments
It's not super clear what the motivations and goals here are. I see a lot of interesting technical details w.r.t. prerequisites, but not much about the actual MIR-based cross-compilation backend. Today, a Rust crate, once compiled, is locked into a target.
cc @rust-lang/compiler
It would be anything that can codegen MIR. My goal is to use LLVM codegen, but there's no reason that's required. I was not aware of https://github.com/MaikKlein/rlsl. It looks interesting! Sadly, the optimizations done by LLVM are too nice to forgo.
Btw, I shouldn't have said ABI, at least for my use case. There are no ABI issues to be had, as there are no system libraries to call. All "function" call arguments are serialized/deserialized explicitly by the runtime/GPU kernel into/from a single byte array. The GPU-side deserialization is handled by the runtime, by inserting a wrapping function around the desired function-to-codegen. Generally, one wants to avoid all serialization/deserialization other than simple copies when passing data between processors. Based on what I've seen in CUDA/C++AMP/HIP, the coprocessor (GPU, in my case) is forced to use whatever struct layout the host uses. Thus, having target-specific Stuff baked into the MIR poses no issues. I say this w.r.t. host<->GPU; I'm currently ignoring linking other GPU libraries (i.e. GPU code only).
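For illustration, here is a minimal sketch of that calling convention (not the actual runtime code; the names are made up): the host packs every argument into one byte array with plain copies, and the wrapper the runtime inserts around the kernel unpacks them before calling the real function.

// Host side: plain copies of each argument into a single byte array.
fn pack_args(x: f32, len: u32) -> Vec<u8> {
    let mut buf = Vec::with_capacity(8);
    buf.extend_from_slice(&x.to_ne_bytes());
    buf.extend_from_slice(&len.to_ne_bytes());
    buf
}

fn user_kernel(x: f32, len: u32) {
    println!("kernel called with x = {x}, len = {len}");
}

// Device side: the wrapper the runtime would generate around `user_kernel`.
fn kernel_wrapper(arg_blob: &[u8]) {
    let x = f32::from_ne_bytes(arg_blob[0..4].try_into().unwrap());
    let len = u32::from_ne_bytes(arg_blob[4..8].try_into().unwrap());
    user_kernel(x, len);
}

fn main() {
    let blob = pack_args(1.5, 1024);
    kernel_wrapper(&blob);
}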
Kinda? At compile time, the crate will always be single target. It's up to the runtime crate to load the metadata of all crates and codegen what's needed. So to answer your second question, the plugin intrinsic is used to grab the DefId (rather, equivalent info) of the function in question. I should have mentioned that it is possible to change the MIR before it is sent to codegen. So, for example, if I wanted to run most of a function (think all the way down the call tree) on my GPU, but the function had cold-path calls to system libraries, the runtime library could (basically; there's more to this) rewrite those calls to post to the host's AQL queue (see this) from the GPU and then have the host call the required function. Here is a "simple" dispatch example, taken from the HSA Foundation documentation:

// Taken from the HSA spec; assumes its helper functions (get_kernel_agent,
// initialize_packet, header, kernel_dispatch_setup, packet_store_release)
// are defined alongside, and that the HSA runtime headers are available.
#include <hsa.h>

void simple_dispatch() {
    // Initialize the runtime
    hsa_init();
    // Retrieve the kernel agent
    hsa_agent_t kernel_agent;
    hsa_iterate_agents(get_kernel_agent, &kernel_agent);
    // Create a queue in the kernel agent. The queue can hold 4 packets,
    // and has no callback or service queue associated with it
    hsa_queue_t *queue;
    hsa_queue_create(kernel_agent, 4, HSA_QUEUE_TYPE_SINGLE, NULL, NULL, 0, 0, &queue);
    // Since no packets have been enqueued yet, we use zero as
    // the packet ID and bump the write index accordingly
    hsa_queue_add_write_index_relaxed(queue, 1);
    uint64_t packet_id = 0;
    // Calculate the virtual address where to place the packet
    hsa_kernel_dispatch_packet_t* packet =
        (hsa_kernel_dispatch_packet_t*) queue->base_address + packet_id;
    // Populate fields in kernel dispatch packet, except for the header,
    // the setup, and the completion signal fields
    initialize_packet(packet);
    // Create a signal with an initial value of one to monitor the task completion
    hsa_signal_create(1, 0, NULL, &packet->completion_signal);
    // Notify the queue that the packet is ready to be processed
    packet_store_release((uint32_t*) packet, header(HSA_PACKET_TYPE_KERNEL_DISPATCH),
                         kernel_dispatch_setup());
    hsa_signal_store_screlease(queue->doorbell_signal, packet_id);
    // Wait for the task to finish, which is the same as waiting for
    // the value of the completion signal to be zero
    while (hsa_signal_wait_scacquire(packet->completion_signal, HSA_SIGNAL_CONDITION_EQ,
                                     0, UINT64_MAX, HSA_WAIT_STATE_ACTIVE) != 0);
    // Done! The kernel has completed. Time to cleanup resources and leave
    hsa_signal_destroy(packet->completion_signal);
    hsa_queue_destroy(queue);
    hsa_shut_down();
}

It's very similar to how IPC code is written.
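As a toy, self-contained illustration of the "cold-path call bounced back to the host" idea described above (this is not HSA and not the mir-hsa runtime; a channel stands in for the AQL queue), the device side posts a request naming a host function plus packed arguments, and a host loop services it:

use std::sync::mpsc;
use std::thread;

enum HostRequest {
    // Index into a table of host functions, plus serialized arguments.
    Call { func: usize, args: Vec<u8> },
    Shutdown,
}

fn host_log(args: &[u8]) {
    println!("host says: {}", String::from_utf8_lossy(args));
}

fn main() {
    let host_fns: Vec<fn(&[u8])> = vec![host_log];
    let (tx, rx) = mpsc::channel();

    // Stands in for the GPU kernel hitting a cold path and posting a packet.
    let device = thread::spawn(move || {
        tx.send(HostRequest::Call { func: 0, args: b"cold path hit".to_vec() }).unwrap();
        tx.send(HostRequest::Shutdown).unwrap();
    });

    // Host-side service loop, analogous to waiting on the queue's signal.
    for req in rx {
        match req {
            HostRequest::Call { func, args } => host_fns[func](&args),
            HostRequest::Shutdown => break,
        }
    }
    device.join().unwrap();
}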
I'm not sure I understand what you mean. This method is directly applicable to grabbing the MIR of existing functions; no need for any syntactic changes. The runtime has access to all of the MIR, so let it decide. We can also override the default providers when the runtime codegens, and so substitute custom ones.
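To make "override the default providers" concrete, here is a toy, self-contained sketch of the idea (none of these types are rustc's; the real provider tables live inside the compiler):

// Stand-in for a provider table: a function that yields the MIR to codegen
// for a given function id.
struct MirBody { statements: Vec<String> }

struct Providers {
    mir_for_codegen: fn(def_id: u32) -> MirBody,
}

fn default_mir_for_codegen(def_id: u32) -> MirBody {
    MirBody { statements: vec![format!("MIR for DefId({def_id}) from crate metadata")] }
}

// The runtime's provider: same MIR, plus the cold-path rewrite pass.
fn gpu_mir_for_codegen(def_id: u32) -> MirBody {
    let mut body = default_mir_for_codegen(def_id);
    body.statements.push("rewrite cold-path host calls into AQL queue posts".to_string());
    body
}

fn main() {
    // The specialized driver swaps the provider in before running codegen.
    let mut providers = Providers { mir_for_codegen: default_mir_for_codegen };
    providers.mir_for_codegen = gpu_mir_for_codegen;
    let body = (providers.mir_for_codegen)(42);
    println!("{:?}", body.statements);
}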
One can use any codegen module one wants, including hand-rolled. My use case is directed toward native targets, so I don't have to write my own codegen module (and write optimizations (!), which is no small feat; LLVM is quite good) 😄 Sorry, that was a bit long-winded!
The problem is that that MIR is tainted with target-specific information.

If you're using serialization, why not compile the crate twice, from source, for two different targets?
I suppose I assumed that. Such properties are desirable for my use case (GPUs on big(-ish) iron Linux); these devices will operate on the same memory, and so almost require such specialization anyway. As for the rest: sure, it's not perfect (whatever that means, anyway), but it is convenient.
That would be awful. The work-arounds I've mentioned are possible because I have something that can rewrite those things by using HSA runtime features (which require Linux kernel support). No way such rewriting code would get accepted into Rust, nor should it, imo.
Allows plugins to define functions which are only expanded by the plugin after all type checking, inference, etc., i.e. only once codegen encounters a call to the intrinsic. In this initial version, the codegen crate is responsible for calling the plugin intrinsic codegen trait and codegenning the resulting extra MIR statements. Plugins are limited to `mir::StatementKind` and those contained therein, so they can't insert extra basic blocks or insert calls to arbitrary functions. This means the compiler can still reason about what is reachable. The form of the interface with the plugins is slightly different than what was proposed in rust-lang#51623. Additionally, signature checking is added.
Allows drivers to define functions which are only expanded by the plugin after all type checking, inference, etc., i.e. only once during mono item collection. The form of the interface with the plugins is slightly different than what was proposed in rust-lang#51623. Additionally, signature checking is added.
These days this would need to go through a major change proposal at least, if it's truly compiler-internal, or a full RFC if not: https://github.com/nikomatsakis/rfcs/blob/major-change-proposal/text/0000-compiler-major-change-process.md -- closing in favor of that. Thanks!
As suggested, this issue is about the Rust single source cross-compilation story. I've had some success with this while I was working on mir-hsa and the AMDGPU Stuff. In my case, the HSA API is used, which accepts GPU-native ELF files directly, and therefore enables me to not have to write a new codegen backend. I have done this with the following changes to Rust:

- Plugin-defined rustc-intrinsics, which run post monomorphization to have access to only concrete types and usage. They are limited to inserting MIR statements just before the original TerminatorKind::Call. This is useful because all Rust functions have a unique type; thus a call to the intrinsic, like this one (note the upvar stuff is nonfunctional, however; I need to put more thought into that part), will "return" the DefId (rather, equivalent info) of the function passed in. After expansion, trans rewrites the terminator to be a direct goto. This is the plugin that expands the kernel_info_for intrinsic mentioned/linked earlier. This expansion originally occurred during trans, like when traditional LLVM intrinsics are handled; however, I think it could be made to happen before collection and partitioning. Either way, this implementation as-is allows crate authors to use plugin intrinsics without forcing downstream crates to also load that plugin (downstream crates won't be able to call the intrinsics generically).
Here is the trait for the plugin registry:
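(A minimal sketch of its shape only, using stand-in types rather than rustc's; the real trait differs in detail.)

// Stand-ins for the compiler-side types the real trait would receive; none of
// these are rustc's types.
struct TyCtxt;            // used to monomorphize types
struct Substs;            // generic substitutions at the expanded call site
struct SourceInfo;        // span/debugging info for the inserted statements
struct Statement(String); // plain MIR statements only: no new blocks or calls
struct Place(String);     // destination for the intrinsic's "return" value

trait IntrinsicExpander {
    /// The intrinsic this plugin expands, e.g. "kernel_info_for".
    fn intrinsic_name(&self) -> &str;

    /// Called once per monomorphized call to the intrinsic. The plugin pushes
    /// statements computing the value and storing it into `dest`; codegen then
    /// rewrites the original call terminator into a direct goto.
    fn expand(
        &self,
        tcx: &TyCtxt,
        substs: &Substs,
        source_info: &SourceInfo,
        dest: &Place,
        extra_stmts: &mut Vec<Statement>,
    );
}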
The extra statements needed to store the return values are put into extra_stmts, which are translated just after the function returns. The other parameters are there so one has what's needed to monomorphize types and for debugging info. My impl had Rust just trusting that the "return" value provided by the plugin matched the type specified by the intrinsic declaration. This is probably not what we want long term.
- A codegen/"debugging" flag to always save/keep Rust's cstore metadata in outputs. Combined with -Z always-encode-mir, we now have every function's MIR available at runtime. My original impl made this hard coded 'cause I'm lazy, so I'll need to fix this before issuing a PR.
- An extra codegen/"debugging" flag to force librustc_mir::monomorphize to translate everything, without relying on upstream definitions and a linker. I made this inaccessible to everyone except specialized rustc drivers (i.e. one can't use rustc -Z retrans-all-deps ..) to prevent misuse.
- Make librustc_metadata::schema public, so one can use CrateRoot and friends. Here is how the metadata was loaded (finding every dylib, including dylibs not actually mapped into the current process, is done elsewhere).
- Make Rust accept a mono item collector root override. This is used by a special rustc driver at compiled-program runtime to rerun trans for specific functions.

Issues (as implemented, so mostly issues related to my runtime impl):
- The runtime (the mir-hsa::runtime crate) will need to know about the upvars of a closure so that they can get mapped onto the GPU.
- A Send-like trait: NumaSend. This is needed so that types can do appropriate serialization of inner memory regions when they are sent to other memories. Doesn't seem like there is a whole lotta thought put into this area, yet. (A rough sketch of what I mean follows below.)

Has anyone else worked on any single source prototype? I see this topic on discuss.rust-lang.org, but nothing recent.
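A minimal sketch of the NumaSend idea (the trait name and shape here are only illustrative; nothing like this exists yet):

// A destination memory region, e.g. a GPU allocation. Stand-in type.
struct DeviceBuffer {
    bytes: Vec<u8>,
}

// Types that know how to serialize their "inner" memory regions (heap
// allocations behind pointers) when shipped to another memory, beyond a
// plain byte copy of the value itself.
trait NumaSend {
    fn copy_to(&self, dst: &mut DeviceBuffer);
}

// A Vec owns a heap region that must be copied explicitly.
impl NumaSend for Vec<f32> {
    fn copy_to(&self, dst: &mut DeviceBuffer) {
        dst.bytes.extend_from_slice(&(self.len() as u64).to_ne_bytes());
        for v in self {
            dst.bytes.extend_from_slice(&v.to_ne_bytes());
        }
    }
}

fn main() {
    let mut buf = DeviceBuffer { bytes: Vec::new() };
    vec![1.0_f32, 2.0, 3.0].copy_to(&mut buf);
    println!("serialized {} bytes for the device", buf.bytes.len());
}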
Related: #51575 and #38789
RFC of sorts, so please criticize. I'm going to submit patches for the proposed changes above, and I would like to at least get something functionally equivalent accepted.
@eddyb