Wear damage using fsync? #557
Comments
Hey, I'm not entirely sure about the impact on drive wear, as I haven't looked into it in depth, and this is an optional feature. Regarding fsync, it is only called when there are actual writes to the file; if there is no data to write, we skip it. You can check the implementation here: FileSink: Line 164
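To illustrate the idea, here is a minimal, hypothetical sketch of the "skip fsync when nothing was written" pattern. This is not the actual Quill FileSink code; the class and member names (ThrottledWriter, _pending_bytes) are made up for illustration.

#include <cstddef>
#include <cstdio>
#include <unistd.h> // fsync, fileno (POSIX)

// Hypothetical writer that only issues fsync after real writes.
class ThrottledWriter
{
public:
  explicit ThrottledWriter(std::FILE* file) : _file(file) {}

  void write(void const* data, std::size_t size)
  {
    std::fwrite(data, 1, size, _file);
    _pending_bytes += size;
  }

  void flush()
  {
    if (_pending_bytes == 0)
    {
      return; // no new data since the last flush: skip both fflush and fsync
    }
    std::fflush(_file);       // move buffered data from user space to the kernel
    ::fsync(::fileno(_file)); // ask the kernel to persist the file to the device
    _pending_bytes = 0;
  }

private:
  std::FILE* _file;
  std::size_t _pending_bytes{0};
};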
I've added an optional feature that allows setting a minimum interval for fsync. You can enable it as shown below; it will be available in the next release:
auto file_sink = quill::Frontend::create_or_get_sink<quill::FileSink>(
"example_file_logging.log",
[]()
{
quill::FileSinkConfig cfg;
cfg.set_minimum_fsync_interval(std::chrono::seconds{1});
cfg.set_fsync_enabled(true);
return cfg;
}()
);
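For context on how such a minimum interval could work, here is a rough sketch of a rate-limited flush, assuming a steady-clock timestamp of the last fsync is kept alongside the file handle. It is only an illustration of the idea, not Quill's implementation.

#include <chrono>
#include <cstdio>
#include <unistd.h> // fsync, fileno (POSIX)

// Hypothetical helper: always flush the stdio buffer, but only issue fsync
// if at least `min_interval` has elapsed since the previous fsync.
void flush_with_min_fsync_interval(std::FILE* file,
                                   std::chrono::steady_clock::duration min_interval,
                                   std::chrono::steady_clock::time_point& last_fsync)
{
  std::fflush(file); // data still reaches the kernel on every flush

  auto const now = std::chrono::steady_clock::now();
  if (now - last_fsync >= min_interval)
  {
    ::fsync(::fileno(file)); // persist to the device at most once per interval
    last_fsync = now;
  }
}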
I've successfully integrated the Quill library into our software stack, and the initial results are promising. I've enabled the fsync option on all file sinks, which, as I understand it, asks the OS to flush the file's data to non-volatile storage. This seems like a sensible precaution to ensure that logged data isn't lost in the event of a sudden power failure.
However, based on my review of the source code, it appears that fsync is called on all active sinks roughly every 500ns (the default backend sleep time) when the backend thread has no active tasks.
My question is: could these frequent fsync calls lead to increased disk wear, or is it safe because fsync only forces a flush when there is actually data to write? My knowledge in this area is limited, so I thought you might have some input.
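As a side note on the 500ns figure, the backend wake-up rate itself is configurable. The snippet below is a sketch assuming the Quill v4+ backend API (quill::BackendOptions with a sleep_duration member, started via quill::Backend::start); the exact member names may differ between versions.

#include <chrono>
#include "quill/Backend.h"

int main()
{
  quill::BackendOptions backend_options;
  // Let the backend thread sleep longer when idle than the 500ns default,
  // so its idle-time housekeeping (including any flushing) runs less often.
  backend_options.sleep_duration = std::chrono::microseconds{100};
  quill::Backend::start(backend_options);
}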