linux/aarch64 Now() should be actually_monotonic() #88652
@@ -263,6 +263,20 @@ impl Instant {
//
// To hopefully mitigate the impact of this, a few platforms are
// excluded as "these at least haven't gone backwards yet".
//
// While issues have been seen on arm64 platforms, the Arm architecture
// requires that the counter monotonically increases and that it must
// provide a uniform view of system time (e.g. it must not be possible
// for a core to receive a message from another core with a time stamp
// and observe time going backwards (ARM DDI 0487G.b D11.1.2)). While
// there have been a few 64-bit SoCs that have bugs which cause time to
// not monotonically increase, these have been fixed in the Linux kernel
// and we shouldn't penalize all Arm SoCs for those who refuse to
// update their kernels:
// SUN50I_ERRATUM_UNKNOWN1 - Allwinner A64 / Pine A64 - fixed in 5.1
// FSL_ERRATUM_A008585 - Freescale LS2080A/LS1043A - fixed in 4.10
// HISILICON_ERRATUM_161010101 - Hisilicon 1610 - fixed in 4.11
// ARM64_ERRATUM_858921 - Cortex A73 - fixed in 4.12
Comment on lines +276 to +279
On our platform support page we specifically name Linux 4.2 as the supported kernel. These fixes were all in later versions. @rust-lang/libs-api How do we feel about merging this? It'd mean that running the code on an older (but supported) kernel version might give an unexpected panic, where we'd have to tell the user to update their kernel (or downgrade their Rust).

While these issues were fixed in the kernel versions I mention above, the patches were also adopted into earlier stable kernels. Assuming they're on a maintained branch of a stable kernel, they should have the fixes.

Documenting the backports too might help. At least we can then point users encountering the error on old distros to patched versions and show that this is considered a kernel bug and there is something they can update to.

I can dig them up, but fundamentally the kernel should guarantee this is fixed. Taking it to an extreme, if a kernel+driver occasionally corrupted packets, I don't think anyone would suggest that Rust's stdlib should wrap all data a user sends in a checksummed container to work around it. The answer there and the answer here should be: please switch to a kernel that includes the fix.

I totally agree, but the workaround exists, and if we take it back and some strange system out there starts breaking, it would be helpful to be able to point out that this is an already-fixed kernel bug, or at least that the Linux devs generally see this as their turf. The strategic problem here is that the fix is "easy" (if done badly...), the downsides of a fix are only noticed by a few (what's a few nanoseconds here and there?), opening a GitHub issue is lower-friction than joining a mailing list, and the libs team is excessively nice about these things because the panics may be encountered by users rather than developers in a position to fix them. Anything that'll make future bug reports easier to handle will be helpful.

Understood... I've gone digging for backports and here is what I've found. Another approach, which I'm a little hesitant to suggest, would be to make … In terms of backports I can find: …
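For context, the "workaround" mentioned above is a clamp applied around the OS clock reading so that observed instants never move backwards. A minimal sketch of that idea is below; the names (`LAST`, `monotonic_now`) are hypothetical, and the actual std implementation uses atomics and lives in std's internals, so this is illustrative only:

```rust
use std::sync::Mutex;
use std::time::Instant;

// Latest instant any thread has observed; None until the first call.
static LAST: Mutex<Option<Instant>> = Mutex::new(None);

// Return the OS reading, clamped so it never goes below a value
// that has already been handed out to some caller.
fn monotonic_now() -> Instant {
    let raw = Instant::now();
    let mut last = LAST.lock().unwrap();
    match *last {
        // OS clock stepped backwards: reuse the previous reading.
        Some(prev) if prev > raw => prev,
        _ => {
            *last = Some(raw);
            raw
        }
    }
}

fn main() {
    let a = monotonic_now();
    let b = monotonic_now();
    assert!(b >= a); // holds even if the underlying clock misbehaves
    println!("elapsed: {:?}", b.duration_since(a));
}
```

The cost of this clamp (a shared, contended location touched on every `Instant::now()`) is exactly the "few nanoseconds here and there" trade-off discussed above.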
On x86 the server situation isn't great due to some hypervisors introducing non-monotonicity in the timestamp counters, which should provide monotonic results. KVM even has an explicit flag by which the host promises that the TSC is reliable, and even in those cases there still seem to be a few systems out there that make and then break that promise. Is the ARM situation different, i.e. is the time source immune to hypervisor interference?
I wouldn't bet on it. Some reporters say they actually observe panics due to time going backwards at a low occurrence rate on their systems, a few times a month or something like that. Applications rarely consist of hot loops doing nothing but calling `Instant::now()`, so the handful of observed panics likely understates how often the clock actually steps backwards.
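As a side note on what application code can do regardless of how `actually_monotonic()` is decided, `Instant` already exposes non-panicking ways to take a difference; this is just a usage sketch, not part of the PR:

```rust
use std::time::{Duration, Instant};

fn main() {
    let start = Instant::now();
    let end = Instant::now();

    // Returns None if `start` somehow compares later than `end`
    // (e.g. because the underlying clock source misbehaved).
    let elapsed = end.checked_duration_since(start).unwrap_or(Duration::ZERO);

    // Or clamp the difference to zero directly.
    let clamped = end.saturating_duration_since(start);

    println!("elapsed: {:?}, clamped: {:?}", elapsed, clamped);
}
```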
It's possible for the hypervisor to trap it, but at least KVM doesn't. The value is read from the hardware and then offset by a value originally programmed by the hypervisor for that VM. If the hardware is functional, the VM would have to work to break it.
I'm not specifically talking about Rust, but about other applications where time going backwards or forwards would result in certificate errors, consensus algorithms going awry, etc.
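To illustrate the mechanism described two comments up: on AArch64 a guest reads the virtual counter register directly, and the hardware subtracts the per-VM offset the hypervisor programmed up front, so no trap is needed on each read. `CNTVCT_EL0` is the architectural register name; the surrounding function is a hypothetical sketch added for illustration:

```rust
// Read the AArch64 virtual counter. CNTVCT_EL0 = physical counter minus
// CNTVOFF_EL2, where the offset was set once by the hypervisor for this VM.
#[cfg(target_arch = "aarch64")]
fn read_virtual_counter() -> u64 {
    let cnt: u64;
    unsafe {
        // ISB keeps the counter read from being speculated ahead of
        // earlier instructions.
        core::arch::asm!("isb", "mrs {c}, cntvct_el0", c = out(reg) cnt);
    }
    cnt
}

#[cfg(target_arch = "aarch64")]
fn main() {
    println!("virtual counter ticks: {}", read_virtual_counter());
}

#[cfg(not(target_arch = "aarch64"))]
fn main() {
    println!("this sketch only reads the counter on aarch64");
}
```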
if time::Instant::actually_monotonic() {
    return Instant(os_now);
}
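For reference, a hedged sketch of what the `actually_monotonic()` allow-list could look like with the change this PR argues for is shown below. The exact list in std may differ; this is illustrative only:

```rust
// Hypothetical allow-list of platforms whose OS clock is trusted to be
// monotonic; the real function lives inside std and may differ.
const fn actually_monotonic() -> bool {
    (cfg!(target_os = "linux") && cfg!(target_arch = "x86_64"))
        || (cfg!(target_os = "linux") && cfg!(target_arch = "x86"))
        // What this PR argues for: the Arm architecture requires the counter
        // to be monotonic and uniform across cores (ARM DDI 0487G.b D11.1.2),
        // and the known SoC errata are fixed in the Linux kernel.
        || (cfg!(target_os = "linux") && cfg!(target_arch = "aarch64"))
        || cfg!(target_os = "windows")
}

fn main() {
    println!("trust the OS clock as monotonic: {}", actually_monotonic());
}
```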