Instant::duration_since returns a shortened duration if a suspend occurred #87906
We currently use `CLOCK_MONOTONIC`.
Note that on Linux getting the time usually does not involve system call overhead at all and is dispatched via the vDSO instead; it is basically a plain function call.
That argument doesn't sound strong. If you sleep for a minute, suspend, wake up, and have to wait another 30 seconds for the next service update, that doesn't seem so bad. So if you expect any relation to wall time, then I don't see a big technical issue with switching to `CLOCK_BOOTTIME`.
SystemTime is not suitable for measuring time; it is allowed to get synchronized and thus can jump around a lot. If the syscall cost is too much then maybe there can be an additional, less expensive version of `Instant::now`? Otherwise I don't see that as much of a good reason not to correctly handle the OS being suspended. While there are not a lot of guarantees about the behavior of `Instant`, it is the de facto way to measure time in Rust. And while Rust may not be able to guarantee much, it should still strive to keep it as accurate as possible if the OS indeed has more accurate APIs it can use.
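To illustrate the jumping-clock point, a minimal std-only sketch: unlike `Instant`, `SystemTime` can go backwards, so its API forces callers to handle that case.

```rust
use std::time::SystemTime;

fn main() {
    let start = SystemTime::now();
    // ... do some work; meanwhile NTP may step the wall clock backwards ...
    match start.elapsed() {
        Ok(d) => println!("elapsed: {d:?}"),
        // SystemTime can jump backwards, so the API forces callers to
        // handle a negative elapsed time explicitly.
        Err(e) => println!("clock went backwards by {:?}", e.duration()),
    }
}
```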
At least on current x86_64 Linux with a TSC clock source, neither `CLOCK_MONOTONIC` nor `CLOCK_BOOTTIME` will incur syscall overhead. Historically, Windows APIs or CPUs with unreliable TSCs were the ones that incurred higher overheads.

If you mean adding a new API that includes time spent suspended, then yes, that would be nice, but currently that's not portable to all supported systems, as discussed in #87907.
Well, the more reliable it becomes on some systems, the more people start relying on implementation details, which then break when the program is shipped to a system where those details don't hold. That's a portability hazard. API contracts exist for a reason.
On systems with VDSO I still don't think this will be a meaningful overhead. If you think it might be too much we can try benchmarking it.
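A rough way to run that benchmark, sketched with the `libc` crate (Linux-only; the clock IDs and the vDSO dispatch are as discussed above):

```rust
use std::time::Instant;

fn read(clock: libc::clockid_t) -> libc::timespec {
    let mut ts = libc::timespec { tv_sec: 0, tv_nsec: 0 };
    unsafe { libc::clock_gettime(clock, &mut ts) };
    ts
}

fn bench(name: &str, clock: libc::clockid_t) {
    const N: u32 = 10_000_000;
    let start = Instant::now();
    for _ in 0..N {
        // black_box keeps the optimizer from eliding the calls.
        std::hint::black_box(read(clock));
    }
    let ns = start.elapsed().as_nanos() as f64 / f64::from(N);
    println!("{name}: {ns:.1} ns/call");
}

fn main() {
    bench("CLOCK_MONOTONIC", libc::CLOCK_MONOTONIC);
    bench("CLOCK_BOOTTIME", libc::CLOCK_BOOTTIME);
}
```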
What we have right now is a portability hazard: the Windows clock already keeps ticking through suspend while the Linux and macOS clocks do not. I'd argue:
I wrote neither [...] nor [...] will incur syscall overhead. That's a negation, in other words I already called it low-overhead. Where is the disagreement?
I disagree; I see it as the difference between an easy-to-notice stumbling block that will be taken into account by major libraries because they encounter it on some major platforms, versus a subtle issue which will cause breakage on less popular platforms because major libraries didn't consider the case, since they were developed on the major platforms. Being loudly inconsistent means nobody can rely on that particular aspect. Being mostly consistent means people will rely on it.
The question is: would that help programs? Do we have any particular error reports that would be fixed by using boot time? What do the current workarounds look like?
No disagreement, sorry, I misread your statement.
Here's a workaround for using `CLOCK_BOOTTIME`.
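For reference, a sketch of that kind of workaround, assuming the `libc` crate on Linux; `BootInstant` is a hypothetical name, not an existing API:

```rust
use std::time::Duration;

#[derive(Clone, Copy)]
struct BootInstant(Duration);

impl BootInstant {
    fn now() -> Self {
        let mut ts = libc::timespec { tv_sec: 0, tv_nsec: 0 };
        // CLOCK_BOOTTIME keeps ticking across system suspend.
        unsafe { libc::clock_gettime(libc::CLOCK_BOOTTIME, &mut ts) };
        BootInstant(Duration::new(ts.tv_sec as u64, ts.tv_nsec as u32))
    }

    fn duration_since(&self, earlier: BootInstant) -> Duration {
        // Saturate instead of panicking, mirroring Instant's behavior.
        self.0.saturating_sub(earlier.0)
    }
}

fn main() {
    let t0 = BootInstant::now();
    std::thread::sleep(Duration::from_millis(100));
    // Unlike a CLOCK_MONOTONIC-based Instant, this measurement includes
    // any time the system spent suspended in between.
    println!("elapsed: {:?}", BootInstant::now().duration_since(t0));
}
```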
@rustbot claim
I can see a problem with picking either solution and not indicating clearly which one it is. For example you wouldn't possibly want a benchmarking tool after a couple of days of sleep to say that maximum/p99/etc iteration took… days. Similar logic can be applied to e.g. games, physics engines, etc. On the other hand I also see use-cases for the time that does elapse in suspend (as described in the issue report).
Posting in the interest of centralizing discussion. To summarize #87907: it was uncertain whether Linux's suspend-aware clock has reliable equivalents on all the other supported systems. As a result, there was an inclination to leave `Instant` on the clocks it already uses.
For #88714, I will quote @CryZe:
And @thomcc says:
However, we will want to fix the Unix Epoch problem for Apple platforms eventually, as time is quite literally running out (even if we still have some).
For what it's worth, the `clock_gettime_nsec_np` API returns a 64-bit nanosecond count.
64-bit nanoseconds won't run out for an astronomically long time, so this is fine. It also isn't a benefit over `mach_absolute_time`.

This is speculation, but I'm pretty sure[1] `mach_absolute_time` is merely discouraged, not deprecated.

Regardless, I don't think we'd have to worry about the Unix epoch problem for `Instant`, since detecting the wrap is completely doable (we'd just have to assume deltas greater than half the type range are from wraparound when converting instants to Durations, not that we need to do this).

[1]: Concretely, Apple recently used it in some extremely shiny new timing-related APIs they're pushing. Anyway, the developer.apple.com website said something similar before about `mach_absolute_time`. So, I think it's just discouraged for users who don't have a reason to use that API, but I don't think that it has been deprecated.
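A sketch of that wrap-detection idea on raw 64-bit tick readings (illustrative only):

```rust
/// Delta between two raw 64-bit tick readings. Wrapping subtraction
/// already yields the correct small delta if the counter wrapped past
/// 2^64; a result over half the range is instead assumed to mean
/// `later` actually precedes `earlier`.
fn tick_delta(earlier: u64, later: u64) -> Option<u64> {
    let diff = later.wrapping_sub(earlier);
    if diff > u64::MAX / 2 { None } else { Some(diff) }
}
```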
Ah, I see then. For some reason I had concluded the margin was much narrower there when I made my first attempt to understand the time code. Well, that is perfectly understandable and not at all confusing. Ahem.

It looks like "mach ticks" may also be related to the kernel's notion of time. It seems best for `Instant` to keep using `mach_absolute_time` here, then.
Can we have two clock types? One measures duration that includes the suspension time, while the other one does not. These two clock types should behave consistently across different platforms. Also, if the host OS does not support a certain clock type, we can return an error at runtime.
Yeah that seems to be the ideal solution.
That is, indeed, likely going to be the case. Most of the question that this issue, and the PRs branching off of it, pertains to is which clock the existing `Instant` should use. And we can certainly attempt to statically guarantee things like "what does the host support?", as we know what the host is at compile time.
Discussion is ongoing upstream at rust-lang/rust#87906 about how exactly to make a change like this, but most of the concerns (backwards compatibility, legacy kernels prior to 2.6.39, non-Linux OSes) are not applicable to us. If upstream decides to keep `Instant` as `CLOCK_MONOTONIC`, we should figure out an alternate solution for our use of `quiche` before backing this out. Bug: 200694560 Change-Id: I188ad48bbaccd2fb0196e8273b307a7bbfc737f1
FWIW, the current situation actually seems to be consistent between Linux and macOS: both use clocks that do not advance when the system is asleep. (Based on this for macOS, where we are using `CLOCK_UPTIME_RAW` or `mach_absolute_time` depending on the architecture, but both seem to behave the same.) So if this is still accurate, only Windows uses a clock that keeps going during system sleep. #88714 would switch Linux over, leaving macOS (and of course all the other targets...) behind. (I am bringing this up because the description in #88714 claims that Linux is the odd one out, but I don't see that supported by the docs.)
Discussion is ongoing upstream at rust-lang/rust#87906 about how exactly to make a change like this, but most of the concerns (backwards compatibility, legacy kernels prior to 2.6.39, non-Linux OSes) are not applicable to us. If upstream decides to keep `Instant` as `CLOCK_MONOTONIC`, we should figure out an alternate solution for our use of `quiche` before backing this out. Bug: 200694560 Change-Id: I3ad265f01064f14cddac815117406c5941a3b2ef Merged-In: I8f7cd025c1ef9e1c7e89163d7d73220181b748ca
So I just read through most of #87907 and #88714, and it seems like there's some consensus that a "monotonic clock that includes time suspended" and a "monotonic clock that does not include time suspended" have individual use cases. But I couldn't find a conclusive statement on what steps could be taken to support both in `std`.

I did a quick sweep of the platforms that have been previously mentioned. Windows is the only one missing an easy choice for "monotonic time not including time suspended", making it the odd one out among Tier 1 platforms. FreeBSD is interesting because it seemingly has three names for the same clock.
Note: `mach_continuous_time` requires macOS 10.12. If Rust can eventually bump the minimum macOS version from 10.7 to 10.12, then Windows would be the only Tier 1 holdout. I figure bumping the minimum from Windows 7 to Windows 10 is pretty much a no-go. However, I did come across an alternative in python-trio/trio#1586 that sounds promising and wouldn't force a minimum version bump. Apparently, Windows has a `QueryUnbiasedInterruptTime` call that excludes time spent suspended and has been available since Windows 7.

What solution would the Rust team like to see? Can we just add a second `Instant`-like type for the other clock?
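A sketch of calling that Windows API through a raw extern declaration (no crate assumed); per Microsoft's docs it reports 100 ns units and does not advance while the system is suspended:

```rust
#[cfg(windows)]
fn unbiased_nanos() -> Option<u64> {
    #[link(name = "kernel32")]
    extern "system" {
        // BOOL QueryUnbiasedInterruptTime(PULONGLONG UnbiasedTime);
        fn QueryUnbiasedInterruptTime(unbiased_time: *mut u64) -> i32;
    }
    let mut t: u64 = 0;
    // Nonzero return means success; the count excludes suspend time.
    if unsafe { QueryUnbiasedInterruptTime(&mut t) } != 0 {
        Some(t * 100) // convert 100 ns units to nanoseconds
    } else {
        None
    }
}
```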
One option is to make it generic with a default clock-source parameter. Then the default behavior could be left unspecified, whatever is available, and users who have more specific needs could try to obtain more specific clocks.
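Something like the following, where every name is illustrative rather than a real std proposal:

```rust
use std::marker::PhantomData;
use std::time::Duration;

/// A clock source yielding nanoseconds from an arbitrary fixed epoch.
trait ClockSource {
    fn now_nanos() -> u64;
}

struct Monotonic;    // platform default; suspend behavior unspecified
struct SuspendAware; // counts suspended time where the host supports it

struct Instant<C: ClockSource = Monotonic> {
    nanos: u64,
    _clock: PhantomData<C>,
}

impl<C: ClockSource> Instant<C> {
    fn now() -> Self {
        Instant { nanos: C::now_nanos(), _clock: PhantomData }
    }

    fn duration_since(&self, earlier: &Self) -> Duration {
        // Both sides share the clock parameter, so we always compare
        // readings taken from the same clock.
        Duration::from_nanos(self.nanos.saturating_sub(earlier.nanos))
    }
}
```

With the clock in the type, an instant from one clock can never accidentally be subtracted from an instant of another.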
Hmm, I hadn't thought of that one. I don't know if there are other clock sources that need to be supported, but I suppose there may be more platforms like FreeBSD that already can't do both. Overall, I don't really have an opinion on the API. I mostly suggested a new type because it seemed like the simplest option.

Tangentially related, I think I'll try to confirm if the Windows workaround actually works in the meantime. (edit: I tried the workaround in Rust and it does seem to remove the time suspended, so that's a nice fallback solution for Windows 7 and 8.)
I've published the boot-time crate that provides an `Instant` backed by `CLOCK_BOOTTIME`.
Document that Instant may or may not include system-suspend time. Since people are still occasionally surprised by this let's make it more explicit. This doesn't add any new guarantees, only documents the status quo. Related issues: rust-lang#87906, rust-lang#79462
I just noticed that the minimum supported macOS version was raised to 10.12 and the minimum supported Windows version will be raised to Windows 10 in the upcoming 1.78 release, so all of the Tier 1 platforms will have both "suspend-aware" and "not suspend-aware" monotonic clocks. Is there any other major hurdle (aside from the RFC process) before we could add support for both types in `std`?
FreeBSD still does not account for suspend: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=236702

The other question is whether, even if all systems have some clock that can account for suspend, those clocks can be made consistent with the sleep/wait APIs that take deadlines. #113752
It seems the intent of the current `Instant` is to make no guarantee about whether suspend time is counted. Adding support for measuring time where system suspends do count as elapsed time would be a new API; at that point one would probably want to talk about APIs that support choosing a clock. That can be explored as a regular crate before starting discussions with t-libs-api about including it in std. The barrier for std inclusion is pretty high these days (higher than it was in Rust 1.0).
There is a grand total of 3 Tier 1 OSes, so I think looking at "the majority" is a weak argument. I also don't think we should be promising anything about the clock backing `Instant`. If we want to provide guaranteed clocks then I agree that should be a new API, e.g. allowing specifying the clock with a type parameter.
It occurs to me that while both the Linux kernel and Go tried and failed to move their default clocks to boot time, their clock objects are both more powerful than what `std` offers. The only thing `Instant` really supports is subtracting two instants to get a Duration.

It's arguable that because subtracting two Instants gets you a Duration, any operation which takes a Duration should behave in the same way as Instant with respect to ticking during suspend. Changing that behavior would likely run into the aforementioned compatibility issues. I don't think I buy that, however: we can simply think of Duration as a difference between two instants in some monotonic clock. It strikes me as very rare, and probably a bug, that a duration used for sleeps would be the same duration used for arithmetic on a `std::time::Instant`. If you have a counterexample, please provide it!
It seems difficult to provide guarantees like this across all platforms, but I think we should try to align the defaults as well as we can with user expectations and correct program behavior. tokio-rs/tokio#3185 and #71860 demonstrate some cases where we have not done that.
One instance of the pattern you're asking for that comes to mind is fixed-rate iteration. You take an instant, do some compute, take another instant, compute the elapsed time, then sleep for the desired period minus the elapsed time. The duration calculated from instants thus factors directly into the sleep time.
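Concretely (std only):

```rust
use std::time::{Duration, Instant};

fn fixed_rate_loop(period: Duration, mut step: impl FnMut()) {
    loop {
        let start = Instant::now();
        step();
        let elapsed = start.elapsed();
        // If Instant and the sleep timer disagree about whether suspend
        // counts as elapsed time, this pacing silently drifts.
        if let Some(remaining) = period.checked_sub(elapsed) {
            std::thread::sleep(remaining);
        }
    }
}
```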
If a platform supports absolute timeouts (like Linux with io_uring) then it's possible to avoid calculating elapsed time altogether: the program calculates the next absolute timestamp from the current time, rounded up to the desired granularity period. Absolute timestamps are also more convenient when there are several timeout sources and the program waits for the nearest one.

Instead of sleeps with relative durations, it's possible to provide a more advanced API with relative or absolute timeouts and a choice of clocks (on a best-effort basis). Previously I've experimented with a completion-based IO driver with relative timeouts. On Linux the driver uses io_uring's timeout operations. On Windows and BSD-like systems it maintains a timer wheel, calculates the timeout duration, and explicitly provides it in system calls.
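The deadline calculation that replaces the relative sleep is tiny; a sketch over raw nanosecond counts:

```rust
/// Next absolute deadline: the current time rounded up to the next
/// multiple of `period`. A reading exactly on a boundary advances to
/// the following boundary so the loop always makes progress.
fn next_deadline_nanos(now: u64, period: u64) -> u64 {
    (now / period + 1) * period
}
```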
In case anybody is looking for a suspend-aware (on supported platforms) `Instant`, crates providing one exist (e.g. the boot-time crate mentioned above).
Work pacing that mixes active time (measured via instant differences) and forced sleep time, and doesn't do a separate measurement across the sleep, implicitly assumes both come from the same clock source.
`Instant::now` currently uses `CLOCK_MONOTONIC`, which stops ticking during suspend. This means that creating a `Duration` via `Instant::duration_since` will be notably shorter than expected if the system suspended between the `&self` instance and the `earlier` argument.

If `CLOCK_BOOTTIME` is used instead, the clock is guaranteed to continue ticking during suspend. This more closely matches the documentation and real-world usage of `Instant`. The documentation for `Instant::duration_since` reads "Returns the amount of time elapsed from another instant to this one." - no mention of an exception for suspend or other low-power states. `tokio` uses `Instant` to govern its sleep and timeout functionality. Given that `tokio` is commonly used to implement network servers and clients, slicing out suspended time was likely not their intention.

Using `CLOCK_BOOTTIME` should have minimal impact on existing usage. Performance-wise, `CLOCK_BOOTTIME` is implemented in Linux by getting the `CLOCK_MONOTONIC` time and adding an offset to it. That offset is updated as part of exiting suspend. As a result, the total cost should be an extra two loads, two adds, and maybe two stores with bad optimization, which are all negligible compared to the cost of servicing a system call. In terms of existing applications, the only ones which will see new behavior are ones which measure time across suspends. The only use case I can think of for wanting the old `CLOCK_MONOTONIC` behavior is time-multiplexing system resources, but I am not aware of an actual user.

Other projects on the same issue: Go moved to `CLOCK_BOOTTIME` but backed it out due to compatibility issues. The Linux kernel changed `CLOCK_MONOTONIC` to act like `CLOCK_BOOTTIME`, since it was almost always the desired behavior and technically spec compatible, but reverted it shortly afterwards due to compatibility issues.
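The offset described above can be observed directly; a sketch assuming the `libc` crate on Linux:

```rust
fn read_secs(clock: libc::clockid_t) -> f64 {
    let mut ts = libc::timespec { tv_sec: 0, tv_nsec: 0 };
    unsafe { libc::clock_gettime(clock, &mut ts) };
    ts.tv_sec as f64 + ts.tv_nsec as f64 / 1e9
}

fn main() {
    // CLOCK_BOOTTIME = CLOCK_MONOTONIC + (time spent suspended), so the
    // gap between the two readings is the total suspend time since boot.
    let gap = read_secs(libc::CLOCK_BOOTTIME) - read_secs(libc::CLOCK_MONOTONIC);
    println!("time spent suspended since boot: ~{gap:.3}s");
}
```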