
Float-free libcore (for embedded systems and kernel drivers, among other things) #1364

Open
emk opened this issue Nov 11, 2015 · 64 comments
Labels
T-dev-tools Relevant to the development tools team, which will review and decide on the RFC. T-libs-api Relevant to the library API team, which will review and decide on the RFC.

@emk

emk commented Nov 11, 2015

(I was talking to @huonw about embedded Rust the other day, and he suggested I write this up as an RFC issue. I hope this is in the correct place!)

I'm having a ton of fun hacking on kernels in Rust. Rust is a wonderful fit for the problem domain, and the combination of libcore and custom JSON --target specs makes the whole process very ergonomic. But there's one issue that keeps coming up on #rust-osdev: libcore requires floating point, but many otherwise reasonable environments place restrictions on floating point use.

Existing discussions of this issue can be found here:

Datum 1: Some otherwise reasonable processors do not support floating point

There's always been a market for embedded processors without an FPU. For the most part, these aren't pathologically weird processors. The standard ARM toolchain supports --fpu=none. Many of the older and/or lower-end ARM chips lack FPUs. For example, the FPU is optional on the Cortex-M4.

Now, I concur (enthusiastically) that not all embedded processors are suitable for Rust. In particular, there are processors where the smallest integer types are u32 and i32, making sizeof(char) == sizeof(uint32_t) == 1 in C, and where uint8_t literally does not exist. There were once quite a few CPUs with 36-bit words. I agree that these CPUs are all fundamentally unsuitable for Rust, because Rust makes the simplifying decision that the basic integer types are 8, 16, 32 and 64 bits wide, to the immense relief of everybody who programs in Rust.

But CPUs without floating point are a lot more common than CPUs with weird-sized bytes. And the combination of rustc and libcore is an otherwise terrific toolchain for writing low-level code for this family of architectures.

Datum 2: Linux (and many other kernels) forbid floating point to speed up syscalls and interrupts

Another pattern comes up very often:

  1. Everybody likes CPUs with a lot of floating point registers, and even a lot of vector floating point registers.
  2. Saving all those floating point registers during a syscall or hardware interrupt can be very expensive. You need to save all the registers to switch tasks, of course, but what if you just want to call write or another common syscall?
  3. It's entirely possible to write large amounts of kernel code without needing floating point.

These constraints point towards an obvious optimization: If you forbid the use of floating point registers in kernel space, you can handle syscalls and interrupts without having to save the floating point state. This lets you avoid executing heavyweight instructions like FXSAVE every time you enter kernel space. Yup, FXSAVE stores 512 bytes of data.

Because of these considerations, Linux normally avoids floating point in kernel space. But ARM developers trying to speed up task switching may also do something similar. And this is a very practical issue for people who want to write Linux kernel modules in Rust.

(Note that this also means that LLVM can't use SSE2 instructions for optimizing copies, either! So it's not just a matter of avoiding f32 and f64; you also need to configure your compiler correctly. This has consequences for how we solve this problem, below.)

Possible solutions

Given this background, I'd argue that "libcore without floats" is a fairly well-defined and principled concept, and not just, for example, a rare pathological configuration to support one broken vendor.

There are several different ways that this might be implemented:

  1. Make it possible to disable f32 and f64 when building libcore. This avoids tripping over places where the ABI mandates the use of SSE2 registers for floating point, as in Compiling libcore without SSE leads to LLVM ERROR: SSE register return with SSE disabled rust#26449. The rust-barebones-kernel libcore_nofp.patch shows that this is trivially easy to do.
  2. Move f32 and f64 support out of libcore and into a higher-level crate. I don't have a good feel for the tradeoffs here—perhaps it would be good to avoid crate proliferation—but this is one possible workaround.
  3. Require support for soft floats in the LLVM & rustc toolchain, even when the platform ABI mandates the use of SSE2 registers. But this is fragile and cumbersome, because it requires maintaining a (custom?) float ABI on platforms even where none exists. And this is currently broken even for x86_64 (Compiling libcore without SSE leads to LLVM ERROR: SSE register return with SSE disabled rust#26449 again), so it seems like this approach is susceptible to bit rot.
  4. Compile libcore with floats and then try to remove them again with LTO. This is hackish, and it requires the developer to leave SSE2 enabled at compilation time, which may allow SSE2-based optimizations to slip in even where f32 and f64 are never mentioned, which will subtly corrupt memory during syscalls and interrupts.
  5. Other approaches? I can't think of any, but I'm sure they exist.

What I'd like to see is a situation where people can build things like Linux kernel modules, pure-Rust kernels and (hypothetically) Cortex-M4 (etc.) code without needing to patch libcore. These all seem like great Rust use cases, and easily disabling floating point is (in several cases) the only missing piece.

@steveklabnik
Member

/cc rust-lang/rust#27701

@hanna-kruppe

[Option 4] Compile libcore with floats and then try to remove them again with LTO. This is hackish, and it requires the developer to leave SSE2 enabled at compilation time, which may allow SSE2-based optimizations to slip in even where f32 and f64 are never mentioned, which will subtly corrupt memory during syscalls and interrupts.

Since LLVM will implement small-ish memcpys by going through XMM registers, this is bound to happen. For example: [u64; 2] copies in release mode. So this option is right out.
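For concreteness, a minimal hypothetical fragment (mine, not from the thread) of the kind of float-free code that LLVM will happily lower to XMM moves on x86_64 in release mode:

    // No f32/f64 anywhere, yet with SSE enabled this 16-byte copy is typically
    // lowered to a movups load/store through an XMM register.
    #[no_mangle]
    pub fn copy_pair(src: &[u64; 2], dst: &mut [u64; 2]) {
        *dst = *src;
    }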

@alexcrichton
Member

I think there's actually a bit of a matrix here which can be helpful when thinking about this: we've got the two axes of "libcore explicitly uses floating point" and "LLVM codegens using floating point", and the interesting possibilities fall out of combining them.

With this in mind, I think it may be better to frame this around "disabling floating point support in generated code" rather than specifically omitting it from libcore itself. For example, looking at that matrix: if LLVM is allowed to use floating point registers, then there's no reason to omit the support from libcore anyway (modulo the fmod issue, which I think is somewhat orthogonal to usability in kernels).

As a result, this may lend itself quite nicely to a non-invasive implementation. For example, on Intel processors there may be something like #[cfg(target_feature = "sse2")] which we could use to gate the emission of f32/f64 trait implementations in libcore. To me this makes more sense than "pass a semi-arbitrary cfg flag to libcore and also disable some codegen". I would personally be more amenable to a patch like this to libcore, and note that this also extends naturally, I believe, to "libcore supports floats if the target does", so unusual architectures may be covered by this as well.
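A rough sketch of that kind of gating (illustrative only, not the actual libcore source; the gated function is a placeholder):

    // Compiled only when the target has hardware floats, approximated here by
    // the x86 "sse2" target feature; on a float-free target the function simply
    // doesn't exist, so it can't pull FP code generation into the build.
    #[cfg(target_feature = "sse2")]
    pub fn average(a: f32, b: f32) -> f32 {
        (a + b) / 2.0
    }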

@emk
Author

emk commented Nov 11, 2015

@rkruppe I agree completely. If we don't want SSE2 instructions, we should tell LLVM not to generate them. Generating them and then trying to remove them will obviously fail.

@alexcrichton Thank you for clarifying the issues! If I understand it, you're proposing two things here:

  1. Floats should always be included in libcore if the target supports them, and excluded if it doesn't.
  2. This could be implemented by using conditional declarations like #[cfg(target_feature = "sse2")] in libcore.

Am I understanding you correctly? If so, I agree that (1) sounds like a perfectly plausible way to address these issues. But if you intended (2) as a literal proposal (and not just an abstract sketch of an implementation), then I'm not convinced it's the right way to go.

The problem with writing something like #[cfg(target_feature = "sse2")] is that libcore would need to know about every possible platform, and you'd quickly wind up with something like:

#[cfg(any(target_feature = "sse2", target_feature = "x87", target_feature = "neon",
          target_feature = "vfp", target_feature = "vfp4", target_feature = "soft_float"))]

...just to cover the Intel and ARM architectures. And depending on how you implemented it, that conditional might have to appear multiple times in libcore. This seems like it would be both ugly and fragile.

Some possible alternatives might be:

 #[cfg(target_feature = "float")]

…or:

 #[cfg(target_float)]

The advantage of these approaches is that libcore wouldn't need to contain long lists of target-specific features, and the decision-making process could be moved closer to librustc_back/target, which is in charge of other target-specific properties.

Logically, this information feels like it would be either:

  1. A TargetOption, or
  2. Something that librustc_back/target could infer from Target and TargetOption, using code that knows how to interpret features like sse2 and neon.

I'd guess that (1) is fairly easy to implement, and it would work well with the target *.json files. (2) would require adding new Rust code for each architecture, to interpret features correctly. Either would probably work.
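A hypothetical sketch of what (1) might look like on the compiler side (the field name and shape are illustrative, not the actual librustc_back source):

    // In librustc_back/target: one more boolean alongside fields like
    // disable_redzone and no_compiler_rt, settable from the target *.json files.
    pub struct TargetOptions {
        // ... existing fields ...
        /// Whether the target supports hardware floating point; when false,
        /// rustc would emit a cfg that libcore can use to omit its f32/f64 APIs.
        pub has_floating_point: bool,
    }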

But in the end, I'd be happy to implement just about any of these approaches—whatever works best for you. Like I said, my goal here is to provide a long-term roadmap for safely writing things like kernel modules using libcore, and I'm happy with anything that gets us there. :-)

@alexcrichton
Member

@emk your understanding is spot on; that's precisely what I was thinking. I'd be fine extending the compiler to have a higher-level notion of "floating point support" where disabling it means something different on every platform, and adding a particular #[cfg] for that seems fine to me!

Some prior art here could be the recent addition of the target_vendor cfg gate. It's both feature gated (i.e. not available on stable by default) and defined in JSON files as well.

@emk
Author

emk commented Nov 12, 2015

Great, thank you for the pointer to target_vendor.

Let me sketch out a design to see if I'm in the right ballpark.

Right now, TargetOptions has a bunch of fields that control specific kinds of code generation features, including disable_redzone, eliminate_frame_pointer, is_like_osx, no_compiler_rt, no_default_libraries and allow_asm.

We could add a has_floating_point to TargetOptions, and a #[cfg(target_has_floating_point)] option behind a feature gate. We could also use better names if anybody wants to propose them. :-) This #[cfg] could be used to conditionalize f32 and f64 in core.

This way, we could define a kernel-safe x86 target using something like:

    "features": "-mmx,-sse,-sse2,-sse3,-ssse3,-sse4.1,-sse4.2,-3dnow,-3dnowa,-avx,-avx2",
    "has-floating-point": false,
    "disable-redzone": true,

I think that this would actually be a fairly small, clean patch. (Alternatively, we could try something more ambitious, where has_floating_point automatically implied the corresponding features list, but that would probably require adding another field named something like features_to_disable_floating_point to TargetOptions.)

Would this design work as is? If not, how could I improve it? If we can come up with a basically simple and satisfactory design, I'd be happy to try to implement it. Thank you for your feedback!

@alexcrichton
Member

@emk yeah, that all sounds good to me. I'd be fine if disabling floats just implied all the necessary features to pass down to LLVM, so they didn't have to be repeated as well.

@emk
Author

emk commented Nov 16, 2015

@alexcrichton Thank you for the feedback!

For a first pass, I'll try to implement has_floating_point in TargetOptions. If we want that to automatically disable the corresponding features, though, we'd probably still need to specify what that means in the target file, at least in the general case:

    "features": "...",
    "disable-floating-point-features": "-mmx,-sse,-sse2,-sse3,-ssse3,-sse4.1,-sse4.2,-3dnow,-3dnowa,-avx,-avx2",
    "has-floating-point": false,
    "disable-redzone": true,

Above, we disable floating point, and then we need to explain what that means, so that the compiler can do it for us.

I'm not sure that's really an improvement over:

    "features": "-mmx,-sse,-sse2,-sse3,-ssse3,-sse4.1,-sse4.2,-3dnow,-3dnowa,-avx,-avx2",
    "has-floating-point": false,
    "disable-redzone": true,

Can anybody think of a better design? I'm definitely open to suggestions here, and I'm sure I don't see all the use cases.

And thank you again for your help refining this design!

@alexcrichton
Member

Hm, yeah, that's a good point I guess; the features being passed down probably do need to be generic. It's a little unfortunate that you can still construct an invalid target specification by disabling floating point and not disabling the features, but I guess that's not necessarily the end of the world.

@emk
Author

emk commented Nov 17, 2015

OK, I've planned out a block of time this week to work on this (hopefully by midweek-ish, if all goes well).

@steveklabnik
Member

Ran into this again today :) @emk did you get a chance to work on it at all?

@emk
Author

emk commented Dec 12, 2015

Not yet! I'm bouncing between two different (free-time) Rust projects right now, and this affects the other one. You're welcome to steal this out from under me if you wish, or you can prod me to go ahead and finish it up as soon as possible. :-)


@steveklabnik
Member

Okay :) It's not mega urgent for me, either, so I might or might not :)

@phil-opp

As a workaround, I created https://github.com/phil-opp/nightly-libcore. It includes thepowersgang's libcore patch.

@alilee

alilee commented Dec 14, 2015

Great idea guys. Though is getting rid of floating point only part of the picture?

Excuse me if I don't have the full perspective, but what I need is to disable (ARM) neon/vfp instructions in bootstrap/exception handler code so that I know that it won't require the fpu to be enabled or the fpu register file to be saved. (llvm-arm seems to like vldr and vstr for multi-word moves).

I would want to link with a core containing the FPU routines, but know that certain sections don't access the FPU registers. If I understand things, the features are defined at the root compilation unit, making it hard to set compiler features for a sub-crate or sub-unit?

@Amanieu
Member

Amanieu commented Dec 16, 2015

How would such an option affect the f32 and f64 types in the language? Would any use of these types become a compile-time error?

@ticki
Contributor

ticki commented Dec 16, 2015

How would such an option affect the f32 and f64 types in the language? Would any use of these types become a compile-time error?

Nope. They'll just not be included in libcore.

@nagisa
Member

nagisa commented Dec 16, 2015

You cannot “not” include primitive types. They are a part of the language.

@ticki
Contributor

ticki commented Dec 16, 2015

@nagisa Of course not. They're primitive. But you can stop providing an API for them, which is what this RFC suggests.

@nagisa
Member

nagisa commented Dec 16, 2015

@ticki I guess quoting the original question is the best here, since there seems to be some misunderstanding:

How would such an option affect the f32 and f64 types in the language? Would any use of these types become a compile-time error?

@Amanieu The answers are “in no way” and “no”. You would still be able to use floating point literals with the notations you use today (0.0, 0.0f32, 0.0f64, 0E1, etc.), use the types f32 and f64 (but not necessarily the values of these types) anywhere the use is allowed today, and use your own (possibly software) implementations of operations on floating point values to do calculations.
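To make that concrete, a small illustration of my own (not nagisa's code): even with libcore's float methods cfg'd away, f32 remains a primitive type, and you can still move values around and implement operations on them yourself at the bit level:

    use core::mem::transmute;

    // Flip the sign bit of an f32 without using any libcore float API.
    pub fn negate(x: f32) -> f32 {
        unsafe { transmute::<u32, f32>(transmute::<f32, u32>(x) ^ 0x8000_0000) }
    }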

@carlpaten

Just chiming in to say that this is an issue I'm hitting too.

@ghost

ghost commented Dec 27, 2015

Whatever we decide as the solution, I think that it should be user-friendly enough that low-level crate developers can also selectively omit floating-point code from their crates. It should also be documented in the Rust Book so that people know about it.

By that, I mean that instead of a large list of targets to omit, we should definitely have a #[cfg(float)] or similar that people can remember and use easily. I see tons of potential errors and maintenance bugs with having to copy-paste a large attribute every time.

@MagaTailor

Will it be possible to do things that llvm disallows?

https://llvm.org/bugs/show_bug.cgi?id=25823
http://reviews.llvm.org/rL260828

@ticki
Contributor

ticki commented Feb 20, 2016

@petevine Generally, no. LLVM's assertions are there for a reason; ignoring them will almost certainly lead to strange bugs.

@parched

parched commented Jul 26, 2016

Well, I don't think it actually needs to be disabled; it's for storing and restoring FP registers, so it's never going to be used if MMX and SSE are disabled.

@Amanieu
Member

Amanieu commented Jul 26, 2016

In any case the compiler won't be generating that instruction on its own without explicitly calling the intrinsic for it.

@parched

parched commented Jul 26, 2016

Yes, so actually it's probably better not to disable this for kernel development, as it could be a useful intrinsic when switching user threads.

@japaric
Member

japaric commented Jul 30, 2016

Disclaimer: I'm not an x86 kernel/OS dev :-)

Can someone confirm that using features: -sse,+soft-float, etc. does everything that x86 kernel devs want? That is: fast context switches (LLVM doesn't use/emit SSE/MMX instructions/registers), core can be compiled without modification (no need to cfg away chunks of it), and floating point operations (e.g. x: f32 + y: f32) lower to software routines (e.g. __addsf3). If yes, then it sounds like this issue is solved and there's no need to modify the core crate or make some other language-level change to support this use case. Is my conclusion correct?
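For reference, the kind of lowering being asked about, as a hypothetical one-liner (nothing target-specific in the source):

    // With "-sse,+soft-float" (or an equivalent soft-float target), this is
    // expected to compile to a call to the compiler-rt routine __addsf3
    // rather than to an SSE addss instruction.
    #[no_mangle]
    pub fn add(x: f32, y: f32) -> f32 {
        x + y
    }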

@parched

parched commented Jul 30, 2016

I can confirm they lower to software routines as expected and only use general-purpose registers. I can't confirm this is enough for the OP, but I believe it should be.

@Amanieu
Member

Amanieu commented Jul 31, 2016

I just tested this on AArch64 and it works fine with "features": "-fp-armv8,-neon". Note that you will need to recompile compiler-rt since these options change the ABI (in particular, floating-point values are passed in integer registers).

@japaric
Member

japaric commented Jul 31, 2016

@parched @Amanieu Thanks for checking!

Note that you will need to recompile compiler-rt since these options change the ABI (in particular, floating-point values are passed in integer registers).

Interesting! Given that we would ultimately like to have Cargo build compiler-rt intrinsics when you build core/std, perhaps we'll have to add a target_float_abi field to target specifications; that way Cargo can check that field to build compiler-rt with the right float ABI. Or perhaps we'll port compiler-rt intrinsics to Rust before that becomes necessary.

@Amanieu
Member

Amanieu commented Jul 31, 2016

In practice you can probably get away with a standard compiler-rt as long as you don't use any floating point values in your code.

@comex

comex commented Jul 31, 2016

From a quick grep of LLVM, it seems all that flag does is enable/disable the FXSAVE and FXRSTOR instructions in the assembler.

Hmm; does that mean that if you pass -sse, you can't use SSE instructions in inline assembly blocks? Because that sounds somewhat annoying.

@parched

parched commented Jul 31, 2016

does that mean that if you pass -sse, you can't use SSE instructions in inline assembly blocks?

No it doesn't apparently (I just tested), but why would you want to use SSE instructions if you have turned it off?

@jdub

jdub commented Jul 31, 2016

The reason to turn it off is that you don't want uncontrolled use of SSE (i.e., context-switch-unsafe) instructions, or use of them anywhere near ABI boundaries. Controlled use is fine, and desirable – you're probably going to want fast crypto, vector acceleration, etc. at some point.

@parched

parched commented Jul 31, 2016

Well, in that case I would turn +sse back on for just that compilation unit; then you get the benefit in Rust code too.

@jdub

jdub commented Jul 31, 2016

That'd be pretty uncontrolled. Generally, a few inline functions to safely wrap assembly blocks will suffice.
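A minimal sketch of that kind of controlled wrapper using today's #[target_feature] attribute (which didn't exist at the time of this discussion; the function is illustrative, and the caller is responsible for making the FP/SIMD state safe to touch, e.g. between kernel_fpu_begin/kernel_fpu_end):

    use core::arch::x86_64::{__m128i, _mm_loadu_si128, _mm_storeu_si128};

    // The crate is built with SSE disabled; only this function is allowed
    // to use SSE2 instructions, and only callers that uphold the FPU-state
    // contract may invoke it.
    #[target_feature(enable = "sse2")]
    pub unsafe fn copy_16_sse2(dst: *mut u8, src: *const u8) {
        let v = _mm_loadu_si128(src as *const __m128i);
        _mm_storeu_si128(dst as *mut __m128i, v);
    }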

@nrc nrc added T-libs-api Relevant to the library API team, which will review and decide on the RFC. T-dev-tools Relevant to the development tools team, which will review and decide on the RFC. labels Aug 19, 2016
@carlpaten

What's the "state of the union" on this particular issue? Do we still need to use libcore_nofp.patch?

@ketsuban
Contributor

ketsuban commented Sep 14, 2016

The state appears to be that, according to @japaric (link) and @parched (link), using the new soft-float feature causes floating-point values to be stored in general-purpose registers and manipulated via software implementations. @Amanieu notes that you'll need to recompile compiler-rt since the float ABI has changed, but the standard one will work if you never actually use a floating-point value, so my inference is you'll still need to recompile something, just not necessarily libcore.

@parched

parched commented Sep 15, 2016

Well, you shouldn't really recompile anything. 'soft-float' (or the equivalent for other targets) should already be set as a feature of the target (or not), and you shouldn't change it with 'rustc' codegen options, otherwise stuff won't link properly.
Obviously, if you need to create a new custom target for this, then all of 'core'/'std' needs to be compiled for that custom target as usual.

@Mart-Bogdan

I would like to add that it's actually possible to use floats inside the Linux kernel:

	kernel_fpu_begin();   /* saves the FPU state as needed and disables preemption */
	...                   /* FP/SIMD work goes here */
	kernel_fpu_end();     /* re-enables preemption */

But with some restrictions: this function is not reentrant, is pretty much undocumented, and AFAIK makes the current thread non-preemptible.

The same story applies to the Windows kernel, though there it is better documented. And it seems it also makes the thread non-preemptible (if I understand what "disables all kernel-mode APC delivery" means).

So, to sum this up: in the C world you can use floats in the kernel if you take precautions, but you're encouraged to keep that use localized and disable the FPU again as soon as possible.

If we are talking about C, float-related code can be moved to a separate .c file that is compiled with different options and then linked together with the main code.

I'm not sure how that could be addressed in Rust and with libcore. We could also use the linker and move FPU code to a separate crate, called via the C ABI. But perhaps a better solution could be worked out.

P.S. I'm not experienced with this; I googled it and decided to add the info to the current thread as it isn't covered in the discussion.

@Serentty

This is a real pain for writing Rust for the Commodore 64 and other 6502 machines. Currently core needs to be patched to disable floating point.

@Qix-

Qix- commented Mar 23, 2024

Hitting this myself today, though I'm not sure if this is a Rust language thing or an LLVM thing. The compiler appears to be emitting fmov instructions on aarch64 in a kernel context that explicitly doesn't have CPACR_EL1[FPEN] set, which causes ptr::write_volatile() to fail because an fmov is emitted in the debug-build precondition check for pointer alignment.

Is there really no way to shut off floating point register / instruction emission in Rust after 9 years? GCC has had -mgeneral-regs-only for ages.

@Amanieu
Member

Amanieu commented Mar 23, 2024

You can use the aarch64-unknown-none-softfloat target to generate AArch64 code which doesn't use the FP registers.

@Qix-

Qix- commented Mar 23, 2024

@Amanieu that doesn't seem to fix it when I specify "llvm-target": "aarch64-unknown-none-softfloat" in the target JSON file. I'm still getting an fmov emission.

<_ZN4core3ptr9const_ptr33_$LT$impl$u20$$BP$const$u20$T$GT$13is_aligned_to17h3f6ddebdb141eeffE+32>  fmov  d0, x1

EDIT: Finally found the full list of features in LLVM. I needed to turn off most of the FP features:

"features": "+strict-align,-neon,-fp-armv8,-sm4,-sha2,-sha3,-aes,-crypto,-crc,-rdm,-fp16fml,-sve,-sve2,-sve2-aes,-fptoint",

Thank you @Amanieu :)

@Amanieu
Member

Amanieu commented Mar 23, 2024

This is a builtin rustc target, available through rustup.

If you're already using a target json then you can base it on the built-in json for the target:

rustc --print target-spec-json --target aarch64-unknown-none-softfloat -Z unstable-options
