Allow specifying multiple interrupt handlers for the same IRQ number #143
Changes from all commits
```diff
@@ -1,48 +1,52 @@
 //! Interrupt handling.

-use alloc::boxed::Box;
-use core::mem::MaybeUninit;
+use alloc::{boxed::Box, vec::Vec};
 use kerla_runtime::{arch::enable_irq, spinlock::SpinLock};
-use kerla_utils::bitmap::BitMap;

 use crate::{deferred_job::run_deferred_jobs, interval_work};

-fn empty_irq_handler() {}
-
 type IrqHandler = dyn FnMut() + Send + Sync;
-const UNINITIALIZED_IRQ_HANDLER: MaybeUninit<Box<IrqHandler>> = MaybeUninit::uninit();
-static IRQ_HANDLERS: SpinLock<[MaybeUninit<Box<IrqHandler>>; 256]> =
-    SpinLock::new([UNINITIALIZED_IRQ_HANDLER; 256]);
-static ATTACHED_IRQS: SpinLock<BitMap<32 /* = 256 / 8 */>> = SpinLock::new(BitMap::zeroed());
+const NUM_IRQ_NUMBERS: usize = 256;

-pub fn init() {
-    let mut handlers = IRQ_HANDLERS.lock();
-    for handler in handlers.iter_mut() {
-        handler.write(Box::new(empty_irq_handler));
-    }
-}
+/// Holds the interrupt handlers. The index is the IRQ number and this vector
+/// contains `NUM_IRQ_NUMBERS` entries.
+static IRQ_VECTORS: SpinLock<Vec<IrqVector>> = SpinLock::new(Vec::new());
+
+pub struct IrqVector {
+    handlers: Vec<Box<IrqHandler>>,
+}
+
+impl IrqVector {
+    pub fn new() -> IrqVector {
+        IrqVector {
+            handlers: Vec::new(),
+        }
+    }
+
+    pub fn handlers_mut(&mut self) -> &mut [Box<IrqHandler>] {
+        &mut self.handlers
+    }
+
+    pub fn add_handler(&mut self, f: Box<IrqHandler>) {
+        self.handlers.push(f);
+    }
+}

-pub fn attach_irq(irq: u8, f: Box<dyn FnMut() + Send + Sync + 'static>) {
-    let mut attached_irq_map = ATTACHED_IRQS.lock();
-    match attached_irq_map.get(irq as usize) {
-        Some(true) => panic!("handler for IRQ #{} is already attached", irq),
-        Some(false) => {
-            attached_irq_map.set(irq as usize);
-            IRQ_HANDLERS.lock()[irq as usize].write(f);
-            enable_irq(irq);
-        }
-        None => panic!("IRQ #{} is out of bound", irq),
-    }
+pub fn attach_irq(irq: u8, f: Box<IrqHandler>) {
+    debug_assert!((irq as usize) < NUM_IRQ_NUMBERS);
+    IRQ_VECTORS.lock()[irq as usize].add_handler(f);
+    enable_irq(irq);
 }

 pub fn handle_irq(irq: u8) {
     {
-        let handler = &mut IRQ_HANDLERS.lock()[irq as usize];
-        unsafe {
-            (*handler.assume_init_mut())();
-        }
+        debug_assert!((irq as usize) < NUM_IRQ_NUMBERS);
+        let mut vectors = IRQ_VECTORS.lock();
+        for handler in vectors[irq as usize].handlers_mut() {
+            handler();
+        }
```
Comment on lines +43 to +47:

The alternative to holding the vector is a statically held type with a link to the next element and the closure; that would avoid reallocating the vector for every new handler. Not a very Rusty approach, though, unless there's an idiomatic way to do intrusive lists.
```diff
-        // `handler` is dropped here to release IRQ_HANDLERS's lock since
+        // `vectors` is dropped here to release IRQ_HANDLERS's lock since
         // we re-enable interrupts just before running deferred jobs.
     }
```
```diff
@@ -53,3 +57,10 @@ pub fn handle_irq(irq: u8) {
     interval_work();
     run_deferred_jobs();
 }
+
+pub fn init() {
+    let mut vectors = IRQ_VECTORS.lock();
+    for _ in 0..NUM_IRQ_NUMBERS {
+        vectors.push(IrqVector::new());
+    }
```
Comment on lines +61 to +65:

That by itself is going to cause at least 8 reallocations, since you haven't reserved the size of the vector in the first place.
```diff
+}
```
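For readers skimming the diff, the registration/dispatch pattern introduced by the patch can be sketched in plain (std) Rust outside the kernel. This is only an illustration: `SpinLock` is replaced by `std::sync::Mutex`, the table is shrunk to 16 entries, and `dispatch_demo` is a hypothetical helper, not part of the PR:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};

// Each IRQ number owns a list of boxed handlers; dispatch runs all of them.
type IrqHandler = dyn FnMut() + Send;

struct IrqVector {
    handlers: Vec<Box<IrqHandler>>,
}

impl IrqVector {
    fn new() -> IrqVector {
        IrqVector { handlers: Vec::new() }
    }
    fn add_handler(&mut self, f: Box<IrqHandler>) {
        self.handlers.push(f);
    }
}

const NUM_IRQ_NUMBERS: usize = 16; // the kernel uses 256; smaller for the demo

fn dispatch_demo() -> usize {
    // init(): one (initially empty) IrqVector per IRQ number.
    let vectors: Mutex<Vec<IrqVector>> = Mutex::new(Vec::new());
    {
        let mut v = vectors.lock().unwrap();
        for _ in 0..NUM_IRQ_NUMBERS {
            v.push(IrqVector::new());
        }
    }

    // attach_irq(): two drivers sharing IRQ 3.
    let counter = Arc::new(AtomicUsize::new(0));
    for _ in 0..2 {
        let c = Arc::clone(&counter);
        vectors.lock().unwrap()[3].add_handler(Box::new(move || {
            c.fetch_add(1, Ordering::SeqCst);
        }));
    }

    // handle_irq(3): every attached handler runs.
    let mut v = vectors.lock().unwrap();
    for handler in v[3].handlers.iter_mut() {
        handler();
    }
    counter.load(Ordering::SeqCst)
}

fn main() {
    assert_eq!(dispatch_demo(), 2);
    println!("both shared handlers ran");
}
```

This mirrors why the old `ATTACHED_IRQS` bitmap and its "already attached" panic could be removed: attaching a second handler to the same IRQ is now simply another `push`.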
This is going to be a vector of vectors, which is going to hurt the cache badly. I'd suggest leaving the array, so at least you'd have a static array in a fixed memory location, without relocations on a new base interrupt vector, @nuta.
It sounds like a sort of premature optimization. I do know it's not good for performance, but I don't want to spend too much time on it.
I wouldn't call caring about cache coherency in an OS a premature optimisation, and a Vec of Vecs is well known for being bad at it. But it's your call, @nuta.
Moreover, the original Linux kernel uses a list for shared interrupts.¹ A nice summary is here:
https://unix.stackexchange.com/questions/47306/how-does-the-linux-kernel-handle-shared-irqs

Then I wonder if it would be better to use the IDT on x86_64 to put some handlers directly there, but that might be hard in Rust without some assembler trickery.

¹ Not all interrupts are shared; all drivers using a shared interrupt have to declare that they use shared interrupts.
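As a rough user-space illustration of the list approach mentioned above (Linux chains one `irqaction` per sharing driver on each IRQ line), a singly linked list of boxed handlers registers in O(1) and never moves existing nodes. This is a sketch of the idea with made-up names, not the kernel's actual layout:

```rust
// A singly linked chain of handlers for one IRQ line, irqaction-style.
type Handler = Box<dyn FnMut() + Send>;

struct Action {
    handler: Handler,
    next: Option<Box<Action>>,
}

struct IrqLine {
    head: Option<Box<Action>>,
}

impl IrqLine {
    fn new() -> IrqLine {
        IrqLine { head: None }
    }

    // Prepend the new handler: one allocation, existing nodes never move.
    fn register(&mut self, handler: Handler) {
        self.head = Some(Box::new(Action {
            handler,
            next: self.head.take(),
        }));
    }

    // On interrupt, walk the chain and call every shared handler;
    // returns how many handlers fired.
    fn dispatch(&mut self) -> usize {
        let mut fired = 0;
        let mut cur = self.head.as_mut();
        while let Some(action) = cur {
            (action.handler)();
            fired += 1;
            cur = action.next.as_mut();
        }
        fired
    }
}

fn main() {
    let mut line = IrqLine::new();
    line.register(Box::new(|| {}));
    line.register(Box::new(|| {}));
    assert_eq!(line.dispatch(), 2);
    println!("ok");
}
```

The trade-off versus the PR's `Vec` is pointer-chasing on every interrupt instead of occasional reallocation on registration; since registration is rare and dispatch is hot, neither choice is obviously better without measurement.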
I think we should not care until we actually need it. It's fast enough for me, and I'd rather focus on adding new features at this early stage.

That said, I don't mean to give up on performance. Let's gradually work out a good approach to improving it, in a safe way, by writing carefully designed data structures in Rust 😃