TinyGo and Security
These are a few ways that we could evaluate TinyGo from a secure computing perspective.
- Trusted execution environment (TEE) - https://en.wikipedia.org/wiki/Trusted_execution_environment
- Hardware security module (HSM) - https://en.wikipedia.org/wiki/Hardware_security_module
- Host firmware in conventional rack-mounted servers (x86-64, ARM) - https://en.wikipedia.org/wiki/Open-source_firmware
- Baseboard management controller (BMC) firmware - https://en.wikipedia.org/wiki/Intelligent_Platform_Management_Interface#Baseboard_management_controller
- Mixed-criticality systems - https://en.wikipedia.org/wiki/Mixed_criticality
- Sandboxed WASM execution - https://webassembly.org/docs/security/
- Where does TinyGo use unsafe? Could we reduce use of unsafe?
  - We try to provide safe abstractions whenever possible.
- What is the state of the TinyGo runtime? How likely is it that the memory allocator, GC, handling of stacks, or scheduler contains bugs that have a security impact?
  - There are a number of things that are security sensitive:
    - Stack overflows are currently unchecked and can lead to memory corruption (and therefore security issues). Fixing this is not easy; it will likely require LLVM support or hardware support (like an MPU, or the stack limit checking in the Cortex-M33). On operating systems with thread support (`-scheduler=threads`), a stack overflow should reliably lead to a crash (segmentation fault) due to the guard page.
    - There could be as-of-yet unknown race conditions, especially in the multicore scheduler. This is difficult to test for.
    - Memory management (the GC / memory allocator, which is the same thing) might have bugs, but the built-in "blocks" allocator (especially the conservative one) is relatively simple and easy to audit. The "precise" GC has a bigger bug surface, since the compiler needs to correctly mark which parts of an object are pointer-free.
- What is the state of the TinyGo compiler toolchain? Especially the glue between the official Go compiler frontend and the LLVM backend. A useful exercise would be to go through golang-announce and think about which of those vulnerabilities affected the toolchain and whether TinyGo could be affected.
- What is the state of TinyGo supply chain security? How wide and deep is the dependency tree? Consider embedding dependency manifests like `go version -m`.
- Possible guidelines for ensuring a secure development life-cycle
- Signed software releases, key generation, and key access/handling.
- Bug bounty program
- What does TinyGo do about side channels? Does it have a Spectre mode like big Go?
  - Spectre:
    - Microcontrollers: unlikely to be relevant, since MCUs are very simple cores.
    - WebAssembly: this should probably be the responsibility of the WASM runtime?
  - Crypto side channels:
    - There might be bugs here: we use the Go standard library software implementations, and those assume constant-time operations as implemented by the Go compiler. LLVM is generally smarter and may make some operations non-constant-time that were previously constant-time. Needs more investigation.