todo-bot: touch TLS todos
Because the TLS PR is so big, todobot bailed on it.

Make a PR that only touches every `// todo:` we opened in #339, so todobot picks them up again.
Orycterope committed Aug 12, 2019
1 parent 722e53f commit 7333eaf
Showing 4 changed files with 69 additions and 69 deletions.
94 changes: 47 additions & 47 deletions kernel/src/i386/gdt.rs
@@ -446,53 +446,53 @@ impl MainTask {
///
/// The only exception to this is double faulting, which does use Hardware Task Switching, and
/// for which we allocate a second TSS, see [DOUBLE_FAULT_TASK].
// todo: per-cpu TSSs / GDT
// body: There are multiple things that aren't ideal about the way we handle TSSs.
// body:
// body: ## Initialization
// body:
// body: TSSs must always be initialized with an iopb_offset of `size_of::<TSS>()`,
// body: so that the TSS's data is not interpreted as the iopb.
// body:
// body: However, because MAIN_TASK has a huge iopb (0x2001 bytes), we want it to live in the
// body: .bss, and be lazy initialized (iopb_offset value, and iopb array memset to 0xFF).
// body: `lazy_static` seems appropriate for that, and we should use it, so we cannot *forget* to
// body: initialize a TSS.
// body:
// body: DOUBLE_FAULT_TASK could be statically initialized, except for the `cr3` field.
// body:
// body: ## Per-cpu
// body:
// body: But we will likely want a MAIN and DOUBLE_FAULT TSS per core. However, they cannot trivially
// body: be put behind a `#[thread_local]`, as they are initialized with the GDT, before cpu-locals
// body: are initialized. It might be possible to make them `#[thread_local]` with some
// body: post-initialization routine that switches to using the MAIN and DOUBLE_FAULT_TASK in the
// body: cpu-local memory area instead of the static early one, after cpu-locals have been initialized,
// body: for core 0.
// body: The static early one could do without an iopb, since we're not going to userspace with it.
// body:
// body: For other cores, having a `#[thread_local]` inside a `lazy_static!` seems to work, but I don't
// body: yet know how cores are going to be started, whether they allocate/initialize their own
// body: GDT + MAIN + DOUBLE_FAULT TSS, or if their parent core does it for them.
// body:
// body: Because of these unknowns, the search for a good design for TSSs/GDT is postponed.
// body:
// body: ## Locking
// body:
// body: Since the TSSs are supposed to be cpu-local, there is no reason for them to have a mutex
// body: around them. An ideal design would be lock-less, which can either be achieved with `#[thread_local]`,
// body: or some custom wrapper around an UnsafeCell just for TSSs.
// body:
// body: ## DOUBLE_FAULT's cr3
// body:
// body: The DOUBLE_FAULT TSS(s)'s cr3 must point to a valid page directory, which will remain valid
// body: (i.e. not be freed) for the entire lifetime of the kernel, and possibly updated when kernel
// body: page tables are modified.
// body:
// body: For now, because we have no such hierarchy, we always make DOUBLE_FAULT's cr3 point
// body: to the current cr3, and update it when we switch page table hierarchies. However the current
// body: way we do kernel paging is not viable for SMP, so we might finally implement such a hierarchy
// body: for SMP; we could then make the DOUBLE_FAULT TSS(s) point to it.
// TODO: per-cpu TSSs / GDT
// BODY: There are multiple things that aren't ideal about the way we handle TSSs.
// BODY:
// BODY: ## Initialization
// BODY:
// BODY: TSSs must always be initialized with an iopb_offset of `size_of::<TSS>()`,
// BODY: so that the TSS's data is not interpreted as the iopb.
// BODY:
// BODY: However, because MAIN_TASK has a huge iopb (0x2001 bytes), we want it to live in the
// BODY: .bss, and be lazy initialized (iopb_offset value, and iopb array memset to 0xFF).
// BODY: `lazy_static` seems appropriate for that, and we should use it, so we cannot *forget* to
// BODY: initialize a TSS.
// BODY:
// BODY: DOUBLE_FAULT_TASK could be statically initialized, except for the `cr3` field.
// BODY:
// BODY: ## Per-cpu
// BODY:
// BODY: But we will likely want a MAIN and DOUBLE_FAULT TSS per core. However, they cannot trivially
// BODY: be put behind a `#[thread_local]`, as they are initialized with the GDT, before cpu-locals
// BODY: are initialized. It might be possible to make them `#[thread_local]` with some
// BODY: post-initialization routine that switches to using the MAIN and DOUBLE_FAULT_TASK in the
// BODY: cpu-local memory area instead of the static early one, after cpu-locals have been initialized,
// BODY: for core 0.
// BODY: The static early one could do without an iopb, since we're not going to userspace with it.
// BODY:
// BODY: For other cores, having a `#[thread_local]` inside a `lazy_static!` seems to work, but I don't
// BODY: yet know how cores are going to be started, whether they allocate/initialize their own
// BODY: GDT + MAIN + DOUBLE_FAULT TSS, or if their parent core does it for them.
// BODY:
// BODY: Because of these unknowns, the search for a good design for TSSs/GDT is postponed.
// BODY:
// BODY: ## Locking
// BODY:
// BODY: Since the TSSs are supposed to be cpu-local, there is no reason for them to have a mutex
// BODY: around them. An ideal design would be lock-less, which can either be achieved with `#[thread_local]`,
// BODY: or some custom wrapper around an UnsafeCell just for TSSs.
// BODY:
// BODY: ## DOUBLE_FAULT's cr3
// BODY:
// BODY: The DOUBLE_FAULT TSS(s)'s cr3 must point to a valid page directory, which will remain valid
// BODY: (i.e. not be freed) for the entire lifetime of the kernel, and possibly updated when kernel
// BODY: page tables are modified.
// BODY:
// BODY: For now, because we have no such hierarchy, we always make DOUBLE_FAULT's cr3 point
// BODY: to the current cr3, and update it when we switch page table hierarchies. However the current
// BODY: way we do kernel paging is not viable for SMP, so we might finally implement such a hierarchy
// BODY: for SMP; we could then make the DOUBLE_FAULT TSS(s) point to it.
pub static MAIN_TASK: Mutex<MainTask> = Mutex::new(MainTask::empty());

/// Double fault TSS
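The initialization rules the TODO above spells out (an `iopb_offset` pointing past the TSS, and lazy initialization so it cannot be forgotten) can be sketched in plain hosted Rust. This is a simplified stand-in, not the kernel's code: `OnceLock` plays the role of `lazy_static!`, and `Tss`/`MainTask` are reduced to the fields the invariant cares about.

```rust
use std::mem::size_of;
use std::sync::OnceLock;

/// Simplified stand-in: only the field the invariant cares about.
#[repr(C)]
struct Tss {
    iopb_offset: u16,
}

struct MainTask {
    tss: Tss,
    /// 0x2000 iopb bytes plus the mandatory trailing 0xFF byte.
    iopb: [u8; 0x2001],
}

impl MainTask {
    fn init() -> MainTask {
        MainTask {
            // Point past the TSS so none of its fields are ever
            // interpreted as I/O permission bitmap data.
            tss: Tss { iopb_offset: size_of::<Tss>() as u16 },
            // All-ones iopb: every port denied until explicitly granted.
            iopb: [0xFF; 0x2001],
        }
    }
}

// Lazy initialization: the first access runs `init`, so a zeroed,
// uninitialized TSS can never be handed out by mistake.
static MAIN_TASK: OnceLock<MainTask> = OnceLock::new();

fn main() {
    let task = MAIN_TASK.get_or_init(MainTask::init);
    assert_eq!(task.tss.iopb_offset as usize, size_of::<Tss>());
    assert!(task.iopb.iter().all(|&b| b == 0xFF));
}
```

In the kernel the wrapper would be a spinlock (or, per the "Locking" section, a cpu-local cell) rather than `OnceLock`, but the lazy-initialization shape is the same.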
2 changes: 1 addition & 1 deletion kernel/src/paging/arch/mod.rs
@@ -7,5 +7,5 @@ pub use self::i386::table::{ActiveHierarchy, InactiveHierarchy};
pub use self::i386::entry::I386Entry as Entry;
pub use self::i386::entry::I386EntryFlags as EntryFlags;
pub use self::i386::is_paging_on;
pub use self::i386::{read_cr2, read_cr3}; // TODO: expose current page directory's address in an arch-independent way.
pub use self::i386::{read_cr2, read_cr3}; // todo: expose current page directory's address in an arch-independent way.
pub use self::i386::lands::{KernelLand, UserLand, RecursiveTablesLand};
16 changes: 8 additions & 8 deletions kernel/src/panic.rs
@@ -61,8 +61,8 @@ pub enum PanicOrigin<'a> {
/// Takes a panic origin, so we can personalize the kernel panic message.
pub fn kernel_panic(panic_origin: &PanicOrigin) -> ! {

// TODO: permanently_disable_interrupts shouldn't be unsafe.
// BODY: disabling interrupts doesn't break any safety guidelines, and is perfectly safe as far as rustc is concerned.
// todo: permanently_disable_interrupts shouldn't be unsafe.
// body: disabling interrupts doesn't break any safety guidelines, and is perfectly safe as far as rustc is concerned.
// Disable interrupts forever!
unsafe { sync::permanently_disable_interrupts(); }
// Don't deadlock in the logger
@@ -183,12 +183,12 @@ pub fn kernel_panic(panic_origin: &PanicOrigin) -> ! {
let _ = writeln!(SerialLogger, "Panic handler: Failed to get kernel elf symbols");
}

// todo: Kernel Stack dump update
// body: Update the kernel stack dump functions to be compatible with the new and improved
// body: kernel panic.
// body:
// body: Now that we know the origin (userspace or kernelspace) in the panic, this should
// body: be easy, and we can finally have userspace stack dumps that actually work.
// TODO: Kernel Stack dump update
// BODY: Update the kernel stack dump functions to be compatible with the new and improved
// BODY: kernel panic.
// BODY:
// BODY: Now that we know the origin (userspace or kernelspace) in the panic, this should
// BODY: be easy, and we can finally have userspace stack dumps that actually work.
let stackdump_source = None;

// Then print the stack
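The stack-dump TODO above hinges on one new fact: the panic handler now knows where the fault originated. A hypothetical sketch of how a `stackdump_source` could exploit that (the enum here is a simplified stand-in for the kernel's `PanicOrigin`, not its real definition):

```rust
/// Simplified stand-in for the kernel's PanicOrigin.
enum PanicOrigin {
    Kernel { stack_pointer: usize },
    User { stack_pointer: usize },
}

/// Pick which stack to dump, now that the origin is known.
fn stackdump_source(origin: &PanicOrigin) -> (&'static str, usize) {
    match origin {
        // Kernel faults dump the kernel stack we are running on.
        PanicOrigin::Kernel { stack_pointer } => ("kernelspace", *stack_pointer),
        // User faults can finally dump the *userspace* stack, since
        // the saved stack pointer is known to belong to userspace.
        PanicOrigin::User { stack_pointer } => ("userspace", *stack_pointer),
    }
}

fn main() {
    let origin = PanicOrigin::User { stack_pointer: 0xb000_0000 };
    let (space, sp) = stackdump_source(&origin);
    assert_eq!(space, "userspace");
    assert_eq!(sp, 0xb000_0000);
}
```

The real function would also have to validate that the userspace stack pointer is mapped before walking it, which is the part the TODO leaves open.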
26 changes: 13 additions & 13 deletions libuser/src/threads.rs
@@ -206,14 +206,14 @@ impl Thread {
/// Allocates the stack, sets up the context and TLS, and calls `svcCreateThread`.
///
/// [`start`]: Thread::start
// TODO: Libuser Thread stack guard
// BODY: Currently the stack of every non-main thread is allocated in the heap, and no page
// BODY: guard protects from stack-overflowing and rewriting all the heap.
// BODY:
// BODY: This is of course terrible for security, as with this setup a stack overflow is U.B.
// BODY:
// BODY: The simplest way to fix this would be to keep allocating the stack on the heap,
// BODY: but remap its last page with no permissions using the yet-unimplemented svcMapMemory syscall.
// todo: Libuser Thread stack guard
// body: Currently the stack of every non-main thread is allocated in the heap, and no page
// body: guard protects from stack-overflowing and rewriting all the heap.
// body:
// body: This is of course terrible for security, as with this setup a stack overflow is U.B.
// body:
// body: The simplest way to fix this would be to keep allocating the stack on the heap,
// body: but remap its last page with no permissions using the yet-unimplemented svcMapMemory syscall.
pub fn create(entry: fn (usize) -> (), arg: usize) -> Result<Self, Error> {

let tls_elf = Once::new();
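The guard-page idea in the TODO above reduces to address arithmetic over the heap allocation. Everything in this sketch is hypothetical (`PAGE_SIZE`, the layout struct, the syscall named in the comment); on i386 stacks grow down, so the page an overflow hits first is the lowest one:

```rust
const PAGE_SIZE: usize = 0x1000;

/// Hypothetical layout of a heap-allocated thread stack.
struct StackLayout {
    /// Page to remap with no permissions (via svcMapMemory, once it exists).
    guard_page: usize,
    /// Lowest usable (writable) stack address.
    usable_bottom: usize,
}

/// Carve a guard page out of the heap allocation [base, base + len).
fn plan_stack(base: usize, len: usize) -> StackLayout {
    assert!(base % PAGE_SIZE == 0, "stack allocation must be page-aligned");
    assert!(len >= 2 * PAGE_SIZE, "need room for a guard page plus usable stack");
    StackLayout {
        // Stacks grow down: an overflow runs off the *bottom* of the
        // allocation, so the lowest page becomes the guard.
        guard_page: base,
        usable_bottom: base + PAGE_SIZE,
    }
}

fn main() {
    let layout = plan_stack(0x0080_0000, 4 * PAGE_SIZE);
    assert_eq!(layout.guard_page, 0x0080_0000);
    assert_eq!(layout.usable_bottom, 0x0080_1000);
}
```

With the guard in place, an overflow faults at the guard page instead of silently rewriting adjacent heap data.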
@@ -294,11 +294,11 @@ extern "fastcall" fn thread_trampoline(thread_context_addr: usize) -> ! {

impl Drop for Thread {
fn drop(&mut self) {
// todo: Properly free resource after thread detach
// body: When detaching a thread, we should ensure that the associated resources (stack,
// body: handle, context, etc...) are properly freed before the Process exits. This can be
// body: done by adding the ThreadContext to a global Vec<> of ThreadContext that gets freed
// body: when the main thread (or the last thread alive?) exits.
// TODO: Properly free resource after thread detach
// BODY: When detaching a thread, we should ensure that the associated resources (stack,
// BODY: handle, context, etc...) are properly freed before the Process exits. This can be
// BODY: done by adding the ThreadContext to a global Vec<> of ThreadContext that gets freed
// BODY: when the main thread (or the last thread alive?) exits.
}
}

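The detach TODO above proposes parking contexts in a global `Vec` that is drained at process teardown. A hosted sketch under those assumptions (all names hypothetical; the real `ThreadContext` also holds the thread handle and TLS region, reduced here to the stack):

```rust
use std::sync::Mutex;

/// Hypothetical, reduced context: owns the thread's heap-allocated stack.
struct ThreadContext {
    stack: Vec<u8>,
}

/// Registry of detached-but-not-yet-reclaimed contexts.
static DETACHED: Mutex<Vec<ThreadContext>> = Mutex::new(Vec::new());

struct Thread {
    context: Option<ThreadContext>,
}

impl Drop for Thread {
    fn drop(&mut self) {
        // Detaching: instead of leaking the context, park it in the
        // global list so it can be freed at process teardown.
        if let Some(ctx) = self.context.take() {
            DETACHED.lock().unwrap().push(ctx);
        }
    }
}

/// Called once by the main (or last live) thread before process exit;
/// returns how many contexts were reclaimed.
fn reclaim_detached() -> usize {
    let mut list = DETACHED.lock().unwrap();
    let count = list.len();
    list.clear(); // dropping each ThreadContext frees its stack
    count
}

fn main() {
    {
        let _t = Thread { context: Some(ThreadContext { stack: vec![0u8; 4096] }) };
    } // Drop runs here; the context moves into DETACHED.
    assert_eq!(reclaim_detached(), 1);
    assert_eq!(reclaim_detached(), 0);
}
```

The open question the TODO raises (main thread vs. last thread alive) is just a question of who calls `reclaim_detached` last.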
