
Minutes 31 Oct 2024


Host: Paul Albertella

Participants: Sebastian Hetze, Igor Stoppa

Agenda

Can we, as part of ELISA, document a more generally applicable process and guidance / criteria for making claims about Linux in a safety context?

If we make technical claims about the capabilities of Linux, or about the risks involved in using it, then we need to provide verifiable evidence to substantiate them.

Verifiable here means that a third party can reproduce your results by following your instructions and the referenced or provided sources.

Framing how we approach safety analysis for Linux: we need to focus on the specific functions we care about, how they can be compromised, and how we can either prevent or detect that. The particular issue we have to deal with in the context of an OS is the possibility of common cause failures: the fault that we are trying to prevent or detect may also compromise the functionality that we are relying on to accomplish this.

Fault injection, as a way of gaining insight into the impact of a given fault scenario, may help us to evaluate:

  • the severity of the risks from the perspective of the specific functionality / capability that we care about
  • the detectability of the consequences, or of the fault event itself

This may give us an alternative to systematic code- and design-level analysis of the kernel and other OS components. We qualify the analysis by documenting the fault scenarios that we have considered and the configurations and wider system integration decisions we have made as part of our verification approach.
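As a concrete illustration, the sketch below drives the kernel's built-in fault-injection framework (failslab, exposed via debugfs) to make slab allocations fail while a workload exercises the function we care about, and records the observable outcome. It is a minimal sketch of the idea rather than an agreed harness: it assumes a kernel built with CONFIG_FAULT_INJECTION, CONFIG_FAILSLAB and CONFIG_FAULT_INJECTION_DEBUG_FS, root access with debugfs mounted, and a hypothetical test command (./safety_function_test) standing in for the real safety function under evaluation.

```python
#!/usr/bin/env python3
"""Minimal sketch: configure the kernel's failslab fault-injection knobs via
debugfs, run a workload that exercises the function of interest, and capture
the observable outcome for the fault-scenario record.

Assumes CONFIG_FAULT_INJECTION, CONFIG_FAILSLAB and
CONFIG_FAULT_INJECTION_DEBUG_FS, and a mounted debugfs; the workload
command below is a placeholder."""

import subprocess
from pathlib import Path

FAILSLAB = Path("/sys/kernel/debug/failslab")
WORKLOAD = ["./safety_function_test"]  # hypothetical test for the function we care about


def write_knob(name: str, value: str) -> None:
    """Write a single fault-injection parameter under debugfs."""
    (FAILSLAB / name).write_text(value)


def configure_failslab(probability: int = 10, times: int = -1) -> None:
    """Make eligible slab allocations fail with the given percentage probability."""
    write_knob("probability", str(probability))  # % of eligible allocations to fail
    write_knob("interval", "1")                  # consider every allocation
    write_knob("times", str(times))              # -1 = no limit on injected faults
    write_knob("verbose", "1")                   # log injected faults to the kernel log


def run_scenario() -> dict:
    """Run the workload under fault injection and record the observable outcome."""
    configure_failslab()
    result = subprocess.run(WORKLOAD, capture_output=True, text=True)
    write_knob("probability", "0")  # stop injecting once the scenario is done
    return {
        "exit_code": result.returncode,  # did the function fail safely / loudly?
        "stderr": result.stderr,         # any error reporting we can check against expectations
    }


if __name__ == "__main__":
    print(run_scenario())
```

A real harness would likely also use the task-filter knob so that injection only affects the process under test, which would help keep the fault scenarios reusable across configurations.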

This would move the conversation to a more objective (or at least independently verifiable) framing. The test cases and fault injections can then be a resource that system integrators use to evaluate or verify their systems.

Opening the ‘black box’ means that we can target fault injections where necessary, but we would ideally want these to be as reusable as possible.
