Determine an idiomatic way of sharing or transferring resources between Interrupt and User contexts #294
I can see at least a few different use cases:

1. Sharing a variable between the main thread and only one interrupt handler

   This is for something like a semaphore the ISR can signal to tell the main thread to perform an action, or some data received by the ISR that it wants to send to the main thread, or the main thread and the ISR sharing a buffer that the main thread fills and the ISR drains. The main thread can never pre-empt the ISR, and since no other ISR shares the variable, it should be possible for the ISR to get direct exclusive non-panicking access. The main thread would need to disable the ISR (but only that one ISR) to get safe exclusive access. The variable might be statically initialised (in the simplest case), or might have to be early-runtime initialised. We might be able to support statically initialised variables first and only later have a good solution for runtime-initialised: you can always use an

2. Sharing a variable between the main thread and one or more interrupt handlers

   This is the much harder case where any ISR might access the variable and might pre-empt another ISR, which seems to leave us with either whole-program analysis a la RTFM or critical sections a la the current

3. Moving a variable from the main thread to one interrupt handler

   Main needs to initialise or obtain some variable (typically a peripheral instance) and only the ISR needs to access it. By definition this is not a statically initialised variable but rather something initialised at runtime and then moved. Once moved, you'd want the ISR to have direct and non-panicking access.

Anything else? I think even just addressing the simplest version of the first use case (statically initialised, shared between main and one ISR only) would be a huge win. |
Re: moving a variable: it would be nice to have a way to move the variable back if we shut down the interrupt. This could be useful when swapping between different interrupts at run time, but it is probably not strictly necessary for a first useful approach. This approach is also useful if you are using something like BBQueue, where you already have SPSC guarantees and you just need to give one or more producers/consumers to the interrupt handler so it can fill/drain events/data as necessary (without a mutex or semaphore).

Re: one or more interrupt handlers: yeah, I think we need the RTFM approach to get deadlock-avoidance guarantees, but it might be useful to have something like Shared for this use case, with a big red "you must BYO guarantees about deadlock avoidance", or to somehow never allow more than one resource handle to be held at any time.

Re: main thread and only one interrupt handler: I think you nailed it. One interesting thing this would allow is something like ping-pong/double buffers, where the main thread can move the buffer from 1->2 and the interrupt can move the buffer from 2->1 in a safe way with shared memory. |
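The BBQueue-style handoff mentioned above relies on single-producer single-consumer (SPSC) guarantees. As a host-runnable illustration (this is not BBQueue's actual API; all names here are made up), a minimal SPSC ring can get by with two atomic indices and no lock, because the producer only ever writes `tail` and the consumer only ever writes `head` — exactly the split you get when one end lives in an ISR and the other in main:

```rust
use std::sync::atomic::{AtomicU8, AtomicUsize, Ordering};

// One-slot-wasted SPSC ring buffer sketch. With exactly one producer and
// one consumer (the guarantee the ISR/main split gives you), no locking
// is needed: each side only writes its own index.
struct Spsc<const N: usize> {
    buf: [AtomicU8; N],
    head: AtomicUsize, // only written by the consumer
    tail: AtomicUsize, // only written by the producer
}

impl<const N: usize> Spsc<N> {
    fn new() -> Self {
        Spsc {
            buf: std::array::from_fn(|_| AtomicU8::new(0)),
            head: AtomicUsize::new(0),
            tail: AtomicUsize::new(0),
        }
    }

    // Producer side (e.g. the ISR draining a peripheral into the queue).
    fn push(&self, v: u8) -> bool {
        let t = self.tail.load(Ordering::Relaxed);
        let next = (t + 1) % N;
        if next == self.head.load(Ordering::Acquire) {
            return false; // full
        }
        self.buf[t].store(v, Ordering::Relaxed);
        self.tail.store(next, Ordering::Release);
        true
    }

    // Consumer side (e.g. the main loop).
    fn pop(&self) -> Option<u8> {
        let h = self.head.load(Ordering::Relaxed);
        if h == self.tail.load(Ordering::Acquire) {
            return None; // empty
        }
        let v = self.buf[h].load(Ordering::Relaxed);
        self.head.store((h + 1) % N, Ordering::Release);
        Some(v)
    }
}
```

With capacity N, at most N-1 elements can be in flight; the wasted slot is what lets `head == tail` unambiguously mean "empty".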
Sure. It's already deadlock-free on Cortex-M since |
@adamgreig Ah sorry, you are correct, in the other thread we talked about the case where only one interrupt would be disabled, and I wasn't thinking of the "total critical section" case we currently have. Honestly I think having something like an |
Actually the current |
What would be the underlying hw mechanism on which it should be implemented? Because if we're talking about a single core and there is an interrupt involved, anything other than a way to mask/disable the interrupt doesn't seem to cut it. /confused Still, is this supposed to be a mechanism for single core only? |
Adding to the discussion: we are working on a multicore extension of RTFM (a prototype has been up and running, and under evaluation, for some months already). The most straightforward approach is to have tasks and resources associated with cores by the programmer, allowing shared resources only within a single domain. This might sound too restrictive, but given that message passing is implemented using lock-free queues, we only need to ensure that atomicity across cores is enforced. In practice this allows zero-cost data propagation between cores. Not exactly sure how this

As mentioned in another issue (#15), we would be happy to see actual examples where RTFM does not fit your bill of embedded programming, and from there suggest and/or develop suitable patterns. (And I don't agree that hiding a potentially panicking Mutex behind a newtype would be a better abstraction than the guaranteed race- and deadlock-free access you get from RTFM.)

Best regards/ |
@perlindgren Is there any meaningful way to make RTFM not require as much macro magic, even if it means driving some upstream development? The thing that presents me with the most issues in RTFM is the use of macros to generate a new DSL within Rust; I would favour something expressed more purely in regular Rust syntax. |
@perlindgren I think the rub here is that I would like to support users who do, and do not, use RTFM. I can believe that use of RTFM, or a similar tool with whole-program visibility, is perhaps the only way to guarantee zero-cost safe code. However, I think there is still value in a low-cost yet safe set of abstractions that could be used outside the context of RTFM. If you believe it is only valuable to develop Embedded Rust in the context of RTFM, I might suggest that you submit an RFC making that a primary/official goal of the Working Group. |
@eddyp Yeah, my suggestion for `arc_singleton!()` was perhaps overly bold. What I had in mind was a statically allocated mutex that would be initialized on first access. Sort of like `lazy_static!()` + the `cortex-m::Mutex`. This likely would only be safe across a single core. |
Solutions that don't depend on procedural macros, off the top of my head: A. Signaling from ISR to main
B. Sharing between ISR and main
C. Moving from main to ISR
|
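Case A above (signaling from ISR to main) can be sketched with a single atomic flag. This is a host-side model with invented names (`TICK_PENDING`, `on_timer_interrupt`, `main_loop_poll`), not a proposal for any particular API:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Flag the ISR sets and the main loop polls and clears.
static TICK_PENDING: AtomicBool = AtomicBool::new(false);

// Stand-in for the interrupt handler body.
fn on_timer_interrupt() {
    TICK_PENDING.store(true, Ordering::Release);
}

// Stand-in for one iteration of the main loop; returns true if the
// flag was set. `swap` reads and clears atomically, so a signal that
// arrives between the read and the clear cannot be lost.
fn main_loop_poll() -> bool {
    TICK_PENDING.swap(false, Ordering::Acquire)
}
```

On a single-core target the same shape works with `core::sync::atomic`, and a lone flag needs no critical section at all.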
RTFM is expressed in regular Rust syntax; attributes, which

With crate level attributes you can reduce the number of required annotations:

#![app]
// runtime initialized static
// (you can't get rid of this because there's no static-level type inference)
static mut SERIAL: Serial = (); // or `= UNINIT` (w/e syntax you prefer)
fn init() {
// ..
// initialize the static
SERIAL = Serial::new();
}
fn main() -> ! {
loop {
// ..
}
}
#[interrupt]
fn USART0() {
// "move" into this ISR (you only get `&mut Serial`, though)
let serial = SERIAL;
// do stuff with `serial`
}
The DSL is used to express "before, after" constraints and ownership / sharing

As the syntax / features are not general purpose I don't see them ever being |
That is not necessarily a downside, and certainly not a correctness problem. Also, one could move the protected resource into the ISR upon first use, if the ISR is supposed to be the exclusive owner, to get rid of the critical section if performance really is an issue.
I don't have any problems with deterministic panics. Even better would be if the compiler could figure it out and warn about it. |
Ok, that syntax is something I feel much more comfortable with. I think I was mostly being thrown off by the weird const stuff in the current version of RTFM. |
@japaric Can't we have:
|
Out of interest, in what situations does this panic? I'm pretty sure I'd made my implementation of the shared wrapper non-panicking.
Can we not avoid using an allocator by having the interrupt handler controller pre-allocate a static block of memory for all the handlers? (Possibly with ways to reduce that allocation if you can choose the specific handlers you expect to use.) I personally find this method very attractive because of its similarity to how similar patterns work for sharing data with threads on full-fat systems. |
@tom, @james, Rust aims at bringing compile-time safety ("fearless programming" has been used in this context), and I think to that end RTFM succeeds. Then there is the question about magic.
A little background: RTFM was initially designed at Luleå University of Technology (LTU) as a coordination language, using C code for describing actions. As a research platform for experimenting it was great. The tooling was written in OCaml, and super simple to extend/alter and play around with (it took me about 3 months to develop the language, compiler, and run-time systems for ARM, Linux, OSX, and even Windows). BUT, C as a language made it easy to step outside the model of computation (Tasks/Resources), and in fact at the same time impossible to prevent. Implementing a (new) language from scratch with stricter semantics (needed for proofs) requires huge effort, so we were hesitant to go that path. In the process I came across Rust (which provides the strict semantics needed), and I started playing around re-encoding the RTFM task and resource model in Rust. While being a beginner I ran into all kinds of traps (missing mem::forget/swap while relying on Drop, etc...). Luckily I came to know Japaric, and we ended up prototyping what eventually was released as RTFM 1.0.
The approach was to use the type system of Rust to statically verify that the (user) given ceilings were sufficient for race free execution. By using a lot of "meta programming" under the hood (Jorge implemented all this), the Rust compiler could in this way ensure the required invariants for SRP based scheduling (so RTFM 1 delivered executables free of run-time checks/panics, with very low OH). However we also learned that using the rust type system for "meta programming" can be cruel - both to us developing the framework, as well as to the end user (e.g., making a function generic over a set of resources requires a lot of trait bounds).
At that point we knew pretty much the limitations of "meta programming" in Rust, so we evaluated three options.
1. The Best..... Make the Rust compiler RTFM-aware. A compiler plugin like a `linter` (e.g. `clippy`) with a total view of the system, while at the same time being capable of producing code. This would allow us to analyze the resource dependencies of any Rust application, even across crates. However, as the internals of the Rust compiler are highly unstable, any change to the compiler might require the compiler plugin to be updated. (That's the reason `clippy` is shipped with the compiler, not as a separate crate/application.) We moved on to other options ....
2. Second Best.... Make an external build system, analyzing the application and generating code (that's how the original RTFM for C was done). Essentially the work would amount to a Rust frontend, parser, type inference, etc., plus functionality for extracting info across crates. We concluded that the work required to make such a build system would be massive. (There is now a project, rust-analyzer https://github.com/rust-analyzer/rust-analyzer, that might prove usable, but it was not available at that point...) In any case, we decided to look further.
3. Third Best... Make use of what Rust can offer in terms of procedural macros. Well aware that new syntax might rub people the wrong way:
- In RTFM 2 we aimed to give the `app!` macro a Rusty look and feel (adopting a struct like syntax for the required metadata).
- In RTFM 3 we streamlined the user API, reducing the inconvenience of double derefs, etc. We worked hard to provide user- and static-friendly dynamic data types in the `heapless` crate, and collected user experiences by introducing Rust and Rust RTFM in three courses at LTU.
- In RTFM 4 we further streamlined the user API, moving towards attributes in favor of the struct-like approach of RTFM 2 and 3. At that point a lot of effort also went into stabilization issues, finding workarounds for being able to run RTFM 3 and 4 on a stable toolchain (which they now do). Moreover, RTFM 4 gives you message passing and timing semantics (postponed message delivery).
In comparison to the original RTFM for C, Rust RTFM 4 now re-implements most of the functionality (the original RTFM derives priorities from deadlines, computes and allocates queues matching the number of potentially outstanding messages, this is not yet in Rust RTFM 4).
You may ask, what took you so long? Why did it take 10 times longer to re-implement Rust RTFM than the original RTFM C version? Main reasons:
A) RTFM for C was a HACK; since it was intended for experimentation, that was OK. And as C was used in the user code, correctness could in any case not be proven, so no effort was made toward soundness; it's merely a proof of concept...
B) In the development of Rust RTFM 1, 2, 3, and 4, lots and lots of attention was spent on finding the right abstractions, finding clever and sound ways to implement the underlying functionality, testing, validating, characterizing overhead, writing documentation, developing course material, teaching, listening, and understanding and taking into account needs and requirements.
So back to the question. Could we reduce the amount of magic related to RTFM?
- Yes and no. Fundamentally we need to be able to trace the resource dependencies to derive resource ceilings at compile time (this is inherent to the theory of the Stack Resource Policy). Without that, we are thrown back to the dark ages of threads, where things start blowing up in your face at run-time (https://www2.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf).
If we go by options 1 or 2 (best/second best), we might be able to do without syntactic sugar (no macros needed), but the required effort is beyond our resources. For 1, the Rust compiler is still internally unstable, so such work would need to be managed/driven by the compiler team. For 2, going in the direction of external tooling also brings in another aspect: we would like the spans from our user code to be present in the final executable for debugging reasons, among other things, so there are some unresolved issues besides the huge effort the approach requires.
So where are we, and where do we go from here?
The design decisions for RTFM 4 are an important step towards standard Rust. We anticipate that crate-wide (or even cross-crate) information will become available to procedural macros (attributes). With that, we could lift the restriction that the RTFM app wraps the whole application. Then using RTFM would be no different than using any other (custom) attribute in Rust.
Would it then reduce the amount of magic involved? No; it's just a syntactic thing. It will make RTFM easier to use, but the semantics do not change.
What about the magic then, can we do anything about it?
Yes, we can certainly further improve documentation, examples etc. to better convey the mental event driven model.
- Naming conventions: should it be `lock` (or, as previously, `claim`), etc.? Nothing is written in stone here.
- Extensions: We have been experimenting with how to express `multilock` (taking more than one resource at a time). Things like that can/should be further investigated.
- Should we distinguish `lock` and `lock_mut`? That would allow us to do some further optimizations relating to multi-unit resource management and scheduling, but at the same time introduces more syntax.
- Multicore support: As mentioned in other post, we have a prototype already up and running. Mental model remains the same.
As seen, none of those changes/additions/extensions alters the fundamental principles behind RTFM and SRP-based scheduling. Interestingly enough, looking at how ARM implemented the NVIC, it actually screams SRP at you; that is why our implementation can be made truly zero cost, but that's another story.
And here is the thing, to me RTFM is just about the `fairy magic` you need (nothing more, nothing less).
In comparison we have the alternative `black magic` of threads lurking around the corner. Here is what Edward Lee of UC Berkeley has to say on that matter (so it's not just me ranting :): https://www2.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf
Should RTFM be a part of the Rust tool chain?
- Not sure, perhaps better as an external framework at least for now, but we can draw benefits from new Rust compiler features (crate/cross crate proc macros, const generics etc.).
Do we need other primitives for shared resources on bare metal?
- Not sure, we have yet to see any bare metal problem that RTFM does not cover.
What do you think?
Best regards
Per
|
@perlindgren There is a lot to unpack there, and I hope to be able to give a longer answer later. I do appreciate the history and discussion of the design constraints you have worked with. However, I want to reiterate that I am not against usage of proc macros, nor even the current syntax of RTFM4. I think that some of the module-level proc macro awareness will help with logically structuring code (if people prefer that). I'm actually a fan of "magic", as long as the cognitive overhead involved is acknowledged and minimized, as much as possible. That being said, this is my opinion, and others may disagree. I want to restate my goal as supporting the use cases listed in #294 (comment), for:
In particular, the last item, library crate developers, is not a use case I have seen you address yet (though Jorge did hit on that topic in his response). It is likely that libraries will need to interact with thread-safe components, and will need a way to be "given" those components at runtime, through dependency injection or other means. In particular, HAL crate developers may also want or need to provide interrupt handler routines, in essence "taking" or "borrowing" the interrupt resource and related data, in order to improve ergonomics for users of these crates. Again, I am very thankful for the existence of RTFM, and I don't aim to detract from what you have achieved. However, as Rust is much more package-based than C or C++, the crates in Rust need to "stand on their own" and be correct without depending on RTFM. This is the problem that I have faced as a maintainer of `nrf52-hal` while trying to provide convenient and correct abstractions for all users of this library. |
Hi
Just a short answer to 3 (libraries), as I wrote earlier.
One way to do this is to base library development on the assumption that the caller provides the resources (be that state or peripheral access) through parameters.
Using RTFM the programmer would bind the interrupt handler and from there call the library function.
If you want to bind an interrupt handler, and steal resources in a library (without using RTFM) this could be an "opt-in" feature of the library.
Important here is mainly that the library should not by itself (without opting in) do any wild Mutex accesses. That way the library would work with or without RTFM.
Best
Per
|
In general, interrupts can have different priorities. Setting them all to the
Sure, but your comment refers to a solution to the 'share between main and ISR'
You can put the

// same as before
#[interrupt]
fn USART0() {
    let serial = SERIAL;
    // do stuff with `serial`
}

#[interrupt] // this could be running at a different priority (that would be UB)
fn USART1() {
    let serial = SERIAL; // <- this should be a compile time error
    // do stuff with `serial`
}

Unless you (a) equalize all interrupt priorities after
static FOO: Mutex<RefCell<u64>> = Mutex::new(RefCell::new(0));
#[interrupt]
fn USART0() {
interrupt::free(|cs| {
let foo = FOO.borrow(cs);
let x = foo.borrow_mut();
bar();
let y = foo.borrow_mut(); // this panics
});
}
// "nobody writes code like that!", right?
// your colleague may write this in some other file / module though
fn bar() {
interrupt::free(|cs| {
let foo = FOO.borrow(cs);
let x = foo.borrow_mut(); // may panic
// ..
});
}
// Or yet another possibility
#[exception] // this won't be stopped by the critical section and can preempt USART0
fn NMI() { // this could be HardFault; same problem
interrupt::free(|cs| {
let foo = FOO.borrow(cs);
let x = foo.borrow_mut(); // this will panic if it preempts USART0
});
}

Again, the root of the problem is the global
That's possible. You could either pre-allocate in excess for all handlers |
@japaric Ah ok, that makes sense in terms of the panics. With my wrapper I had equated both the 'this value hasn't been initialised' and 'you cannot get a borrow on this value at this time' cases to both return

static FOO: Shared<u64> = Shared::new();
#[interrupt]
fn USART0() {
interrupt::free(|cs| {
if let Some(foo) = FOO.get(cs) {
/* do stuff with foo */
}
});
} |
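A host-side sketch of what such a `Shared<T>` could look like, assuming the semantics described above: access yields `None` both when the value is uninitialized and when it is already borrowed. The critical-section token (`cs`) is omitted here, and the closure-based `get` differs from the `FOO.get(cs)` shape above because a safe host sketch cannot hand references out past the borrow guard — names and design are illustrative only:

```rust
use std::cell::RefCell;

// Hypothetical non-panicking shared cell. A real embedded version would
// take a critical-section token and be a `Sync` static; this sketch only
// shows the access shape.
struct Shared<T> {
    inner: RefCell<Option<T>>,
}

impl<T> Shared<T> {
    const fn new() -> Self {
        Shared { inner: RefCell::new(None) }
    }

    // Late (runtime) initialization.
    fn put(&self, value: T) {
        *self.inner.borrow_mut() = Some(value);
    }

    // Runs `f` on the value only if it is initialized AND not currently
    // borrowed; both failure modes collapse into `None` instead of a panic.
    fn get<R>(&self, f: impl FnOnce(&mut T) -> R) -> Option<R> {
        let mut guard = self.inner.try_borrow_mut().ok()?;
        guard.as_mut().map(f)
    }
}
```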
I am a little confused, are we talking about an embedded generic solution, or are we talking about RTFM?
The OSEK/AUTOSAR OS solution for this is priority ceiling, i.e. temporarily raising the priority of the task/code accessing the shared resource to the highest priority among the tasks/ISRs sharing that particular resource. I'm not sure how we can translate this to code without an OS, or how we can make some Rustic implementation of GetResource/ReleaseResource that could be implemented once and reused to implement the priority ceiling protocol for an OS. My gut feeling is we should be able to use the type system somehow, but I think we will need some kind of locking mechanism (e.g. a spinlock) to achieve run-time panic-free code. |
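A tiny host-side model of GetResource/ReleaseResource under the priority ceiling protocol. Nothing here touches real hardware: `CURRENT_PRIO` merely stands in for BASEPRI or a task-priority register, and `lock` is an invented name, not an RTFM or OSEK API:

```rust
use std::cell::Cell;

thread_local! {
    // Model of the running context's priority (BASEPRI-like); 0 = thread mode.
    static CURRENT_PRIO: Cell<u8> = Cell::new(0);
}

// Ceiling lock in the spirit of SRP / OSEK GetResource-ReleaseResource:
// raise the current priority to the resource ceiling for the duration of
// the closure, then restore it. On real hardware the "raise" would write
// BASEPRI (or mask the relevant interrupts).
fn lock<R>(ceiling: u8, f: impl FnOnce() -> R) -> R {
    CURRENT_PRIO.with(|p| {
        let old = p.get();
        p.set(old.max(ceiling)); // GetResource: contexts at or below the ceiling cannot preempt
        let r = f();
        p.set(old); // ReleaseResource: restore on exit
        r
    })
}
```

Note that nesting is naturally safe in this model: an inner lock with a lower ceiling never lowers the effective priority, which is why SRP-style locking cannot deadlock.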
I disagree. A critical section is a reasonable way to ensure exclusive access to shared resources. It may not be the ideal way but that is a different topic.
Fair.
Absolutely. We already do this in e.g. the #[interrupt] and #[entry] macros, right? The main concern here is to keep a familiar program structure.
There's nothing worse than deadlocks in embedded programming. Trading a panic for a deadlock is a horrible idea.
Agreed. |
All kind of solutions.
I think a safe API like RTFM uses the priority ceiling protocol (PCP) and exposes a safe API to access the underlying data, but this is only possible because the DSL enforces the (a) and (b) invariants at compile time.
I refer you to RTFM v1 for an old version that used the type system to track interrupt priorities and ceilings (i.e. BASEPRI). Not only was the API super unergonomic to use, but there were also problems with Rust's aliasing model / borrow checker that forced you to use
I agree with this. My comment was specifically about disabling all interrupts to create a critical section. That mechanism also blocks higher priority tasks that don't share memory with the context that needs to access the shared memory -- that's what I was referring to as "a correctness issue". Other mechanisms for creating critical sections, like masking specific interrupts or raising the priority (see BASEPRI), don't have this issue (or minimize it). |
@HarkonenBade and I were talking a bit about this yesterday on IRC and came up
|
@japaric That looks great for the moving of resources into interrupt handlers. How would the sharing work? |
Currently it would use reference semantics, so things that only require |
I have high hopes for this, as it feels like a very elegant syntax for doing bare-bones interrupt interfacing in places where the Rust compiler's reference semantics are sufficient to solve any sharing concerns. |
Sharing (references) doesn't really work. That's why this is listed under 'moving from main to ISR'. You can use channels, though. It's more nuanced than that. Since we are placing the closure in a |
For non-RTFM use, when I started reading this thread I had hoped to find something like

#[interrupt]
fn USART0(serial: Serial) {
    // do stuff with `serial`
}

fn main() {
    let serial = ...;
    USART0.enable(serial);
    // to take it back later:
    let (serial, ) = USART0.disable().expect("Interrupt was not active");
}

that could have no overhead at all in the interrupt, but that'd only be achievable if we could make sure that the interrupt never ever gets enabled without setting the

(This case seems to be the most important of the use cases, as the data-flow cases seem to come naturally by passing one end of an SPSC into the interrupt.) |
I assume you can put the entry of unsafe fn USART0() in the vector table, right? |
If I understand your idea correctly, that's not quite idiomatic, and you are still in a situation where the developer might forget to enable the interrupt. It's better to have the enable happen implicitly at scope end, and only make the entry into the critical section explicit; this would be similar in feeling to how drop() happens. Also, I don't consider the panic an option; better to have a zero-cost abstraction, or stick with C if we don't 🤪 |
I'd consider this a good thing: The function does not get called until something explicitly requests it to. (And otherwise, how can one hope to have as little error handling as possible run in the interrupt?) IMO the "interrupt local" variables should be valid when the interrupt is enabled. (Conversely, disabling the interrupt would (move out and) drop them, and never disabling keeps them forever owned by it).
I don't understand what you mean there; the intended workings of the

I see, though (from the "critical section if" part), that this is getting so close to the "static mut _: impl Trait" version that it (when thought through) probably winds up being the same, plus or minus whether there is a mutable closure or a function with its syntactic arguments turned into global statics by macros similar to what treats their statics now.
That's what the

That looks like a reasonable API, but its behavior in edge cases needs to be considered:

```rust
#[interrupt]
fn USART0(serial: &mut Serial) { // (argument needs to be `&mut _` to prevent cloning singletons)
    // ..
}

#[entry]
fn main() -> ! {
    let p = cortex_m::Peripherals::take().unwrap();
    p.NVIC.enable(Interrupt::USART0);

    // triggers `USART0`; UB since its state is not initialized
    p.NVIC.pend(Interrupt::USART0);

    let serial = ..;
    USART0::enable(serial);
}
```

```rust
#[interrupt]
fn USART0(x: &mut u64) {
    // ..

    // UB if unchecked
    USART::enable(..);

    // ..
}
```

```rust
#[interrupt]
fn USART0(x: &mut u64) {
    // ..

    // preempted by EXTI0 at this point

    // ..
}

// higher priority than `USART0`
#[interrupt]
fn EXTI0(nvic: &mut NVIC) {
    NVIC.disable(Interrupt::USART0);

    // UB if unchecked
    USART::enable(..);
}
```
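One way to defuse the "UB if unchecked" paths is to make `enable` itself checked: it refuses to overwrite an already-occupied state slot, turning the re-enable scenarios above into recoverable errors. A host sketch under that assumption, with illustrative names and `std::sync::Mutex` standing in for a critical section:

```rust
use std::sync::Mutex;

// Stand-in for a serial peripheral singleton.
struct Serial;

// The interrupt's state slot, written only by the checked `enable`.
static USART0_STATE: Mutex<Option<Serial>> = Mutex::new(None);

#[derive(Debug, PartialEq)]
enum EnableError {
    AlreadyEnabled,
}

fn usart0_enable(serial: Serial) -> Result<(), EnableError> {
    let mut slot = USART0_STATE.lock().unwrap();
    if slot.is_some() {
        // e.g. called again from the handler itself, or from a
        // higher-priority interrupt: refuse instead of causing UB
        return Err(EnableError::AlreadyEnabled);
    }
    *slot = Some(serial);
    // real code would also set the NVIC enable bit here, inside the
    // same critical section
    Ok(())
}

fn main() {
    assert!(usart0_enable(Serial).is_ok());
    assert_eq!(usart0_enable(Serial), Err(EnableError::AlreadyEnabled));
}
```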
The "What if it gets pended" would require all code paths that can lead to an

I thought the "rewriting from handler itself" and "rewriting from higher priority interrupt" could go away if enable took a

The updated example doesn't look half as nice as my original one (as the interrupt author can't differentiate between interface arguments and statics any more), but may still be worth considering:

```rust
#[interrupt]
fn USART0() {
    static serial: Option<Serial> = None;
    static count: i32 = 0;
    // ... as currently
}

// Not doing this in main to demonstrate it can return
fn setup(some_peripherals) {
    let serial = some_peripherals.serial.take();
    USART0::set(|s| s.serial(serial).count(99));
    USART0::enable();
}
```

(Whether the setters are grouped or not, or whether the enable is in there as well, is probably a matter of taste; some grouping does make sense, as all those accesses incur the run-time critical-section-plus-priority-checking cost that RTFM gives for free; the grouping reduces that and may be nice to read too.)

The
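The grouped setter in `USART0::set` can be sketched as a small builder that touches all of the handler's statics inside one (conceptual) critical section, rather than paying that cost per field. Host-runnable model only; all names here are illustrative, not a real API:

```rust
// Stand-in for the handler's statics.
struct Usart0State {
    serial: Option<&'static str>, // stand-in for an `Option<Serial>` static
    count: i32,
}

// The builder borrows the handler's state once; in real code, entering and
// leaving the critical section would bracket the whole closure, not each field.
struct Setter<'a>(&'a mut Usart0State);

impl<'a> Setter<'a> {
    fn serial(self, s: &'static str) -> Self {
        self.0.serial = Some(s);
        self
    }
    fn count(self, c: i32) -> Self {
        self.0.count = c;
        self
    }
}

fn main() {
    let mut state = Usart0State { serial: None, count: 0 };
    // analogous to USART0::set(|s| s.serial(serial).count(99))
    Setter(&mut state).serial("serial0").count(99);
    assert_eq!(state.count, 99);
    assert_eq!(state.serial, Some("serial0"));
}
```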
The other option is that we could restrict access to the NVIC if you were using this method of interrupt orchestration, as it would probably need to own at least a pointer to the NVIC anyway to handle the enable and disable bits.
ad "restrict access to the NVIC" That restriction would need to be build-time (e.g. from a feature in cortex-m that disables access to it), as the vectors are in the global table from the beginning of execution, and saying "if you use this, pretty please call the thing that consumes the NVIC before doing anything else" won't be enough to claim safety. Might be feasible, but I'd be unsure whether that composes well with HALs that just started migrating from
I'd be more tempted to have some `InterruptController` that takes possession of the NVIC peripheral and is then used to enable/disable/pend interrupts. e.g.

```rust
fn USART0(serial: &mut Serial) { // (argument needs to be `&mut _` to prevent cloning singletons)
    // ..
}

#[entry]
fn main() -> ! {
    let p = cortex_m::Peripherals::take().unwrap();
    let ic = InterruptController::new(p.NVIC);

    ic.pend(USART0); // Will return a failure as USART0 is not enabled.

    let serial = ..;
    ic.enable(USART0, (serial,));
    ic.pend(USART0);
}
```
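A host-runnable mock can show the intended contract of such an `InterruptController`: it owns the (simulated) NVIC state, and `pend` fails for an interrupt that was never enabled. Everything below is illustrative, not an existing API:

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq, Eq, Hash, Clone, Copy)]
enum Interrupt {
    USART0,
}

// Owns the (simulated) NVIC enable bits; in real code it would consume
// the NVIC singleton so nothing else can touch those registers.
struct InterruptController {
    enabled: HashMap<Interrupt, bool>,
}

impl InterruptController {
    fn new() -> Self {
        InterruptController { enabled: HashMap::new() }
    }

    fn enable(&mut self, irq: Interrupt) {
        // real code would store the interrupt's resources here too
        self.enabled.insert(irq, true);
    }

    fn pend(&mut self, irq: Interrupt) -> Result<(), &'static str> {
        if self.enabled.get(&irq).copied().unwrap_or(false) {
            Ok(()) // real code would set the NVIC pend bit here
        } else {
            Err("interrupt not enabled")
        }
    }
}

fn main() {
    let mut ic = InterruptController::new();
    assert!(ic.pend(Interrupt::USART0).is_err()); // not enabled yet
    ic.enable(Interrupt::USART0);
    assert!(ic.pend(Interrupt::USART0).is_ok());
}
```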
@HarkonenBade I was actually thinking something similar. I think for safety it might require that we enforce ownership of the NVIC (so no `NVIC::pend()`, or at least make that unsafe), and maaaaaybe ownership of the Interrupts, though I'm not 100% on that. It would be nice to have `InterruptController::enable()` be fallible, for the reasons you listed. I've also been thinking about how to reasonably statically allocate space for the resources used by each interrupt, and how we could possibly avoid dynamic dispatch (e.g. every interrupt hits the Interrupt Controller, which then dispatches with the correct context info).
This is now listed at https://github.com/rust-embedded/not-yet-awesome-embedded-rust#sharing-data-with-interrupts, I think we can close this issue here.
I've recently published the
See discussion on this thread: rust-embedded/bare-metal#15
Edit: This may end up being more than one solution, particularly "moving" vs "sharing", and for different levels of "sharing" guarantees.