core.fsyncmethod: add 'batch' mode for faster fsyncing of multiple objects #1134
Conversation
Please change the PR title and description before sending 😄
The pull request has 390 commits. The max allowed is 30. Please split the patch series into multiple pull requests. Also consider squashing related commits.
/submit
Submitted as pull.1134.git.1647379859.gitgitgadget@gmail.com
@@ -10,9 +10,9 @@
 #include "packfile.h"
On the Git mailing list, Junio C Hamano wrote (reply to this):
"Neeraj Singh via GitGitGadget" <gitgitgadget@gmail.com> writes:
> From: Neeraj Singh <neerajsi@microsoft.com>
>
> Preparation for adding bulk-fsync to the bulk-checkin.c infrastructure.
>
> * Rename 'state' variable to 'bulk_checkin_state', since we will later
> be adding 'bulk_fsync_objdir'. This also makes the variable easier to
> find in the debugger, since the name is more unique.
>
> * Move the 'plugged' data member of 'bulk_checkin_state' into a separate
> static variable. Doing this avoids resetting the variable in
> finish_bulk_checkin when zeroing the 'bulk_checkin_state'. As-is, we
> seem to unintentionally disable the plugging functionality the first
> time a new packfile must be created due to packfile size limits. While
> disabling the plugging state only results in suboptimal behavior for
> the current code, it would be fatal for the bulk-fsync functionality
> later in this patch series.
Sorry, but I am confused. The bulk-checkin infrastructure is there
so that we can send many little objects into a single packfile
instead of creating many little loose object files. Everything we
throw at object-file.c::index_stream() will be concatenated into the
single packfile while we are "plugged" until we get "unplugged".
My understanding of what you are doing in this series is to still
create many little loose object files, but avoid the overhead of
having to fsync them individually. And I am not sure how well the
original idea behind the bulk-checkin infrastructure (avoiding the
overhead of creating many loose objects by creating a single packfile,
which presumably needs just one fsync of a single .pack file at the
end) squares with your goal of still creating many loose object files
but syncing them more efficiently.
Is it just the new feature is piggybacking on the existing bulk
checkin infrastructure, even though these two have nothing in
common?
On the Git mailing list, Neeraj Singh wrote (reply to this):
On Tue, Mar 15, 2022 at 10:33 PM Junio C Hamano <gitster@pobox.com> wrote:
>
> "Neeraj Singh via GitGitGadget" <gitgitgadget@gmail.com> writes:
>
> > From: Neeraj Singh <neerajsi@microsoft.com>
> >
> > Preparation for adding bulk-fsync to the bulk-checkin.c infrastructure.
> >
> > * Rename 'state' variable to 'bulk_checkin_state', since we will later
> > be adding 'bulk_fsync_objdir'. This also makes the variable easier to
> > find in the debugger, since the name is more unique.
> >
> > * Move the 'plugged' data member of 'bulk_checkin_state' into a separate
> > static variable. Doing this avoids resetting the variable in
> > finish_bulk_checkin when zeroing the 'bulk_checkin_state'. As-is, we
> > seem to unintentionally disable the plugging functionality the first
> > time a new packfile must be created due to packfile size limits. While
> > disabling the plugging state only results in suboptimal behavior for
> > the current code, it would be fatal for the bulk-fsync functionality
> > later in this patch series.
>
> Sorry, but I am confused. The bulk-checkin infrastructure is there
> so that we can send many little objects into a single packfile
> instead of creating many little loose object files. Everything we
> throw at object-file.c::index_stream() will be concatenated into the
> single packfile while we are "plugged" until we get "unplugged".
>
I noticed that you invented bulk-checkin back in 2011, but I don't
think your description matches what the code actually does.
index_bulk_checkin is only called from index_stream, which is only
called from index_fd. index_fd goes down the index_bulk_checkin path
for large files (512MB by default). It looks like the effect of the
'plug/unplug' code is to allow multiple large blobs to go into a
single packfile rather than each getting its own separate packfile.
> My understanding of what you are doing in this series is to still
> create many little loose object files, but avoid the overhead of
> having to fsync them individually. And I am not sure how well the
> original idea behind the bulk-checkin infrastructure to avoid
> overhead of having to create many loose objects by creating a single
> packfile (and presumably having to fsync at the end, but that is
> just a single .pack file) with your goal of still creating many
> loose object files but synching them more efficiently.
>
> Is it just the new feature is piggybacking on the existing bulk
> checkin infrastructure, even though these two have nothing in
> common?
>
I think my new usage is congruent with the existing API, which seems
to be about combining multiple add operations into a large transaction,
where we can do some cleanup operations once we're finished. In the
preexisting code, the transaction is about adding a bunch of large objects
to a single pack file (while leaving small objects loose), and then completing
the packfile when the adds are finished.
---
On a side note, I've also been thinking about how we could use a
packfile approach as an alternative means to achieve faster addition of
many small objects. It's essentially what you stated above, where we'd
send our little objects into a pack file. But to avoid frequent
repacking overhead, we might want to reuse the 'latest' packfile across
multiple Git invocations by appending objects to it, with an fsync on
the file at the end.

We'd need sufficient padding between objects created by different Git
invocations to ensure that previously synced data doesn't get disturbed
by later operations. We'd need to rewrite the pack indexes each time,
but that's at least derived metadata, so it doesn't need to be fsynced.
To make the pack indexes more incrementally-updatable, we might want to
have the fanout table be checksummed, with checksummed pointers to leaf
blocks. If we detect corruption during an index lookup, we could
recreate the index from the packfile.

Essentially the above proposal is to move away from storing loose
objects in the filesystem and instead to index the data within Git
itself.

Thanks,
Neeraj
On the Git mailing list, Junio C Hamano wrote (reply to this):
Neeraj Singh <nksingh85@gmail.com> writes:
> I think my new usage is congruent with the existing API, which seems
> to be about combining multiple add operations into a large transaction,
> where we can do some cleanup operations once we're finished. In the
> preexisting code, the transaction is about adding a bunch of large objects
> to a single pack file (while leaving small objects loose), and then completing
> the packfile when the adds are finished.
OK, so it was part me, and part a suboptimal presentation, I guess
;-)
Let me rephrase the idea to see if I got it right this time.
The bulk-checkin API has two interesting entry points, "plug" that
signals that we are about to repeat possibly many operations to add
new objects to the object store, and "unplug" that signals that we
are done such adding. They are meant to serve as a hint for the
object layer to optimize its operation.
So far the only way the hint was used was that the logic that sends
an overly large object into a packfile (instead of storing it loose,
which leaves it subject to expensive repacking later) can shove more
than one such objects in the same packfile.
This series invents another use of the "plug"-"unplug" hint. By
knowing that many loose object files are created and when the series
of object creation ended, we can avoid having to fsync each and
every one of them on certain filesystems and achieve the same
robustness. The new "batch" option to core.fsyncmethod triggers
this mechanism.
Did I get it right, more-or-less?
Thanks.
On the Git mailing list, Neeraj Singh wrote (reply to this):
On Wed, Mar 16, 2022 at 9:14 AM Junio C Hamano <gitster@pobox.com> wrote:
>
> Neeraj Singh <nksingh85@gmail.com> writes:
>
> > I think my new usage is congruent with the existing API, which seems
> > to be about combining multiple add operations into a large transaction,
> > where we can do some cleanup operations once we're finished. In the
> > preexisting code, the transaction is about adding a bunch of large objects
> > to a single pack file (while leaving small objects loose), and then completing
> > the packfile when the adds are finished.
>
> OK, so it was part me, and part a suboptimal presentation, I guess
> ;-)
>
> Let me rephrase the idea to see if I got it right this time.
>
> The bulk-checkin API has two interesting entry points, "plug" that
> signals that we are about to repeat possibly many operations to add
> new objects to the object store, and "unplug" that signals that we
> are done such adding. They are meant to serve as a hint for the
> object layer to optimize its operation.
>
> So far the only way the hint was used was that the logic that sends
> an overly large object into a packfile (instead of storing it loose,
> which leaves it subject to expensive repacking later) can shove more
> than one such objects in the same packfile.
>
> This series invents another use of the "plug"-"unplug" hint. By
> knowing that many loose object files are created and when the series
> of object creation ended, we can avoid having to fsync each and
> every one of them on certain filesystems and achieve the same
> robustness. The new "batch" option to core.fsyncmethod triggers
> this mechanism.
>
> Did I get it right, more-or-less?
Yes, that's my understanding as well.
Thanks,
Neeraj
On the Git mailing list, Junio C Hamano wrote (reply to this):
Neeraj Singh <nksingh85@gmail.com> writes:
>> Did I get it right, more-or-less?
>
> Yes, that's my understanding as well.
I guess what I wrote would make a useful material for early part of
the log message to help future developers.
Thanks.
On the Git mailing list, Neeraj Singh wrote (reply to this):
On Wed, Mar 16, 2022 at 11:10 AM Junio C Hamano <gitster@pobox.com> wrote:
>
> Neeraj Singh <nksingh85@gmail.com> writes:
>
> >> Did I get it right, more-or-less?
> >
> > Yes, that's my understanding as well.
>
> I guess what I wrote would make a useful material for early part of
> the log message to help future developers.
>
> Thanks.
Will do. I changed the commit message to explain the current
functionality of bulk-checkin and how it's similar to batched-fsync.
Documentation/config/core.txt (outdated):

@@ -62,22 +62,54 @@ core.protectNTFS::
 Defaults to `true` on Windows, and `false` elsewhere.
On the Git mailing list, Patrick Steinhardt wrote (reply to this):
On Tue, Mar 15, 2022 at 09:30:54PM +0000, Neeraj Singh via GitGitGadget wrote:
> From: Neeraj Singh <neerajsi@microsoft.com>
>
> When adding many objects to a repo with `core.fsync=loose-object`,
> the cost of fsync'ing each object file can become prohibitive.
>
> One major source of the cost of fsync is the implied flush of the
> hardware writeback cache within the disk drive. This commit introduces
> a new `core.fsyncMethod=batch` option that batches up hardware flushes.
> It hooks into the bulk-checkin plugging and unplugging functionality,
> takes advantage of tmp-objdir, and uses the writeout-only support code.
>
> When the new mode is enabled, we do the following for each new object:
> 1. Create the object in a tmp-objdir.
> 2. Issue a pagecache writeback request and wait for it to complete.
>
> At the end of the entire transaction when unplugging bulk checkin:
> 1. Issue an fsync against a dummy file to flush the hardware writeback
> cache, which should by now have seen the tmp-objdir writes.
> 2. Rename all of the tmp-objdir files to their final names.
> 3. When updating the index and/or refs, we assume that Git will issue
> another fsync internal to that operation. This is not the default
> today, but the user now has the option of syncing the index and there
> is a separate patch series to implement syncing of refs.
>
> On a filesystem with a singular journal that is updated during name
> operations (e.g. create, link, rename, etc), such as NTFS, HFS+, or XFS
> we would expect the fsync to trigger a journal writeout so that this
> sequence is enough to ensure that the user's data is durable by the time
> the git command returns.
>
> Batch mode is only enabled if core.fsyncObjectFiles is false or unset.
>
> _Performance numbers_:
>
> Linux - Hyper-V VM running Kernel 5.11 (Ubuntu 20.04) on a fast SSD.
> Mac - macOS 11.5.1 running on a Mac mini on a 1TB Apple SSD.
> Windows - Same host as Linux, a preview version of Windows 11.
>
> Adding 500 files to the repo with 'git add' Times reported in seconds.
>
> object file syncing | Linux | Mac | Windows
> --------------------|-------|-------|--------
> disabled | 0.06 | 0.35 | 0.61
> fsync | 1.88 | 11.18 | 2.47
> batch | 0.15 | 0.41 | 1.53
>
> Signed-off-by: Neeraj Singh <neerajsi@microsoft.com>
> ---
> Documentation/config/core.txt | 5 +++
> bulk-checkin.c | 67 +++++++++++++++++++++++++++++++++++
> bulk-checkin.h | 2 ++
> cache.h | 8 ++++-
> config.c | 2 ++
> object-file.c | 2 ++
> 6 files changed, 85 insertions(+), 1 deletion(-)
>
> diff --git a/Documentation/config/core.txt b/Documentation/config/core.txt
> index 062e5259905..c041ed33801 100644
> --- a/Documentation/config/core.txt
> +++ b/Documentation/config/core.txt
> @@ -628,6 +628,11 @@ core.fsyncMethod::
> * `writeout-only` issues pagecache writeback requests, but depending on the
> filesystem and storage hardware, data added to the repository may not be
> durable in the event of a system crash. This is the default mode on macOS.
> +* `batch` enables a mode that uses writeout-only flushes to stage multiple
> + updates in the disk writeback cache and then a single full fsync to trigger
> + the disk cache flush at the end of the operation. This mode is expected to
> + be as safe as `fsync` on macOS for repos stored on HFS+ or APFS filesystems
> + and on Windows for repos stored on NTFS or ReFS filesystems.
This mode will not be supported by all parts of our stack that use our
new fsync infra. So I think we should both document that some parts of
the stack don't support batching, and say what the fallback behaviour is
for those that don't.
> core.fsyncObjectFiles::
> This boolean will enable 'fsync()' when writing object files.
> diff --git a/bulk-checkin.c b/bulk-checkin.c
> index 93b1dc5138a..5c13fe17802 100644
> --- a/bulk-checkin.c
> +++ b/bulk-checkin.c
> @@ -3,14 +3,20 @@
> */
> #include "cache.h"
> #include "bulk-checkin.h"
> +#include "lockfile.h"
> #include "repository.h"
> #include "csum-file.h"
> #include "pack.h"
> #include "strbuf.h"
> +#include "string-list.h"
> +#include "tmp-objdir.h"
> #include "packfile.h"
> #include "object-store.h"
>
> static int bulk_checkin_plugged;
> +static int needs_batch_fsync;
> +
> +static struct tmp_objdir *bulk_fsync_objdir;
>
> static struct bulk_checkin_state {
> char *pack_tmp_name;
> @@ -80,6 +86,34 @@ clear_exit:
> reprepare_packed_git(the_repository);
> }
>
> +/*
> + * Cleanup after batch-mode fsync_object_files.
> + */
> +static void do_batch_fsync(void)
> +{
> + /*
> + * Issue a full hardware flush against a temporary file to ensure
> + * that all objects are durable before any renames occur. The code in
> + * fsync_loose_object_bulk_checkin has already issued a writeout
> + * request, but it has not flushed any writeback cache in the storage
> + * hardware.
> + */
> +
> + if (needs_batch_fsync) {
> + struct strbuf temp_path = STRBUF_INIT;
> + struct tempfile *temp;
> +
> + strbuf_addf(&temp_path, "%s/bulk_fsync_XXXXXX", get_object_directory());
> + temp = xmks_tempfile(temp_path.buf);
> + fsync_or_die(get_tempfile_fd(temp), get_tempfile_path(temp));
> + delete_tempfile(&temp);
> + strbuf_release(&temp_path);
> + }
> +
> + if (bulk_fsync_objdir)
> + tmp_objdir_migrate(bulk_fsync_objdir);
> +}
> +
We never unset `bulk_fsync_objdir` anywhere. Shouldn't we be doing that
when we unplug this infrastructure?
Patrick
> static int already_written(struct bulk_checkin_state *state, struct object_id *oid)
> {
> int i;
> @@ -274,6 +308,24 @@ static int deflate_to_pack(struct bulk_checkin_state *state,
> return 0;
> }
>
> +void fsync_loose_object_bulk_checkin(int fd)
> +{
> + /*
> + * If we have a plugged bulk checkin, we issue a call that
> + * cleans the filesystem page cache but avoids a hardware flush
> + * command. Later on we will issue a single hardware flush
> + * before as part of do_batch_fsync.
> + */
> + if (bulk_checkin_plugged &&
> + git_fsync(fd, FSYNC_WRITEOUT_ONLY) >= 0) {
> + assert(bulk_fsync_objdir);
> + if (!needs_batch_fsync)
> + needs_batch_fsync = 1;
> + } else {
> + fsync_or_die(fd, "loose object file");
> + }
> +}
> +
> int index_bulk_checkin(struct object_id *oid,
> int fd, size_t size, enum object_type type,
> const char *path, unsigned flags)
> @@ -288,6 +340,19 @@ int index_bulk_checkin(struct object_id *oid,
> void plug_bulk_checkin(void)
> {
> assert(!bulk_checkin_plugged);
> +
> + /*
> + * A temporary object directory is used to hold the files
> + * while they are not fsynced.
> + */
> + if (batch_fsync_enabled(FSYNC_COMPONENT_LOOSE_OBJECT)) {
> + bulk_fsync_objdir = tmp_objdir_create("bulk-fsync");
> + if (!bulk_fsync_objdir)
> + die(_("Could not create temporary object directory for core.fsyncobjectfiles=batch"));
> +
> + tmp_objdir_replace_primary_odb(bulk_fsync_objdir, 0);
> + }
> +
> bulk_checkin_plugged = 1;
> }
>
> @@ -297,4 +362,6 @@ void unplug_bulk_checkin(void)
> bulk_checkin_plugged = 0;
> if (bulk_checkin_state.f)
> finish_bulk_checkin(&bulk_checkin_state);
> +
> + do_batch_fsync();
> }
> diff --git a/bulk-checkin.h b/bulk-checkin.h
> index b26f3dc3b74..08f292379b6 100644
> --- a/bulk-checkin.h
> +++ b/bulk-checkin.h
> @@ -6,6 +6,8 @@
>
> #include "cache.h"
>
> +void fsync_loose_object_bulk_checkin(int fd);
> +
> int index_bulk_checkin(struct object_id *oid,
> int fd, size_t size, enum object_type type,
> const char *path, unsigned flags);
> diff --git a/cache.h b/cache.h
> index d347d0757f7..4d07691e791 100644
> --- a/cache.h
> +++ b/cache.h
> @@ -1040,7 +1040,8 @@ extern int use_fsync;
>
> enum fsync_method {
> FSYNC_METHOD_FSYNC,
> - FSYNC_METHOD_WRITEOUT_ONLY
> + FSYNC_METHOD_WRITEOUT_ONLY,
> + FSYNC_METHOD_BATCH
> };
>
> extern enum fsync_method fsync_method;
> @@ -1766,6 +1767,11 @@ void fsync_or_die(int fd, const char *);
> int fsync_component(enum fsync_component component, int fd);
> void fsync_component_or_die(enum fsync_component component, int fd, const char *msg);
>
> +static inline int batch_fsync_enabled(enum fsync_component component)
> +{
> + return (fsync_components & component) && (fsync_method == FSYNC_METHOD_BATCH);
> +}
> +
> ssize_t read_in_full(int fd, void *buf, size_t count);
> ssize_t write_in_full(int fd, const void *buf, size_t count);
> ssize_t pread_in_full(int fd, void *buf, size_t count, off_t offset);
> diff --git a/config.c b/config.c
> index 261ee7436e0..0b28f90de8b 100644
> --- a/config.c
> +++ b/config.c
> @@ -1688,6 +1688,8 @@ static int git_default_core_config(const char *var, const char *value, void *cb)
> fsync_method = FSYNC_METHOD_FSYNC;
> else if (!strcmp(value, "writeout-only"))
> fsync_method = FSYNC_METHOD_WRITEOUT_ONLY;
> + else if (!strcmp(value, "batch"))
> + fsync_method = FSYNC_METHOD_BATCH;
> else
> warning(_("ignoring unknown core.fsyncMethod value '%s'"), value);
>
> diff --git a/object-file.c b/object-file.c
> index 295cb899e22..ef6621ffe56 100644
> --- a/object-file.c
> +++ b/object-file.c
> @@ -1894,6 +1894,8 @@ static void close_loose_object(int fd)
>
> if (fsync_object_files > 0)
> fsync_or_die(fd, "loose object file");
> + else if (batch_fsync_enabled(FSYNC_COMPONENT_LOOSE_OBJECT))
> + fsync_loose_object_bulk_checkin(fd);
> else
> fsync_component_or_die(FSYNC_COMPONENT_LOOSE_OBJECT, fd,
> "loose object file");
> --
> gitgitgadget
>
On the Git mailing list, Neeraj Singh wrote (reply to this):
On Wed, Mar 16, 2022 at 12:31 AM Patrick Steinhardt <ps@pks.im> wrote:
>
> On Tue, Mar 15, 2022 at 09:30:54PM +0000, Neeraj Singh via GitGitGadget wrote:
> > From: Neeraj Singh <neerajsi@microsoft.com>
> > diff --git a/Documentation/config/core.txt b/Documentation/config/core.txt
> > index 062e5259905..c041ed33801 100644
> > --- a/Documentation/config/core.txt
> > +++ b/Documentation/config/core.txt
> > @@ -628,6 +628,11 @@ core.fsyncMethod::
> > * `writeout-only` issues pagecache writeback requests, but depending on the
> > filesystem and storage hardware, data added to the repository may not be
> > durable in the event of a system crash. This is the default mode on macOS.
> > +* `batch` enables a mode that uses writeout-only flushes to stage multiple
> > + updates in the disk writeback cache and then a single full fsync to trigger
> > + the disk cache flush at the end of the operation. This mode is expected to
> > + be as safe as `fsync` on macOS for repos stored on HFS+ or APFS filesystems
> > + and on Windows for repos stored on NTFS or ReFS filesystems.
>
> This mode will not be supported by all parts of our stack that use our
> new fsync infra. So I think we should both document that some parts of
> the stack don't support batching, and say what the fallback behaviour is
> for those that don't.
>
Can do. I'm hoping that you'll revive your batch-mode refs change too so that
we get batching across the ODB and Refs, which are the two data stores that
may receive many updates in a single Git command. This documentation
comment will read:
```
* `batch` enables a mode that uses writeout-only flushes to stage multiple
updates in the disk writeback cache and then does a single full fsync of
a dummy file to trigger the disk cache flush at the end of the operation.
Currently `batch` mode only applies to loose-object files. Other repository
data is made durable as if `fsync` was specified. This mode is expected to
be as safe as `fsync` on macOS for repos stored on HFS+ or APFS filesystems
and on Windows for repos stored on NTFS or ReFS filesystems.
```
> > core.fsyncObjectFiles::
> > This boolean will enable 'fsync()' when writing object files.
> > diff --git a/bulk-checkin.c b/bulk-checkin.c
> > index 93b1dc5138a..5c13fe17802 100644
> > --- a/bulk-checkin.c
> > +++ b/bulk-checkin.c
> > @@ -3,14 +3,20 @@
> > */
> > #include "cache.h"
> > #include "bulk-checkin.h"
> > +#include "lockfile.h"
> > #include "repository.h"
> > #include "csum-file.h"
> > #include "pack.h"
> > #include "strbuf.h"
> > +#include "string-list.h"
> > +#include "tmp-objdir.h"
> > #include "packfile.h"
> > #include "object-store.h"
> >
> > static int bulk_checkin_plugged;
> > +static int needs_batch_fsync;
> > +
> > +static struct tmp_objdir *bulk_fsync_objdir;
> >
> > static struct bulk_checkin_state {
> > char *pack_tmp_name;
> > @@ -80,6 +86,34 @@ clear_exit:
> > reprepare_packed_git(the_repository);
> > }
> >
> > +/*
> > + * Cleanup after batch-mode fsync_object_files.
> > + */
> > +static void do_batch_fsync(void)
> > +{
> > + /*
> > + * Issue a full hardware flush against a temporary file to ensure
> > + * that all objects are durable before any renames occur. The code in
> > + * fsync_loose_object_bulk_checkin has already issued a writeout
> > + * request, but it has not flushed any writeback cache in the storage
> > + * hardware.
> > + */
> > +
> > + if (needs_batch_fsync) {
> > + struct strbuf temp_path = STRBUF_INIT;
> > + struct tempfile *temp;
> > +
> > + strbuf_addf(&temp_path, "%s/bulk_fsync_XXXXXX", get_object_directory());
> > + temp = xmks_tempfile(temp_path.buf);
> > + fsync_or_die(get_tempfile_fd(temp), get_tempfile_path(temp));
> > + delete_tempfile(&temp);
> > + strbuf_release(&temp_path);
> > + }
> > +
> > + if (bulk_fsync_objdir)
> > + tmp_objdir_migrate(bulk_fsync_objdir);
> > +}
> > +
>
> We never unset `bulk_fsync_objdir` anywhere. Shouldn't we be doing that
> when we unplug this infrastructure?
>
Will Fix.
Thanks,
Neeraj
On the Git mailing list, Patrick Steinhardt wrote (reply to this):
On Wed, Mar 16, 2022 at 11:21:56AM -0700, Neeraj Singh wrote:
> On Wed, Mar 16, 2022 at 12:31 AM Patrick Steinhardt <ps@pks.im> wrote:
> >
> > On Tue, Mar 15, 2022 at 09:30:54PM +0000, Neeraj Singh via GitGitGadget wrote:
> > > From: Neeraj Singh <neerajsi@microsoft.com>
> > > diff --git a/Documentation/config/core.txt b/Documentation/config/core.txt
> > > index 062e5259905..c041ed33801 100644
> > > --- a/Documentation/config/core.txt
> > > +++ b/Documentation/config/core.txt
> > > @@ -628,6 +628,11 @@ core.fsyncMethod::
> > > * `writeout-only` issues pagecache writeback requests, but depending on the
> > > filesystem and storage hardware, data added to the repository may not be
> > > durable in the event of a system crash. This is the default mode on macOS.
> > > +* `batch` enables a mode that uses writeout-only flushes to stage multiple
> > > + updates in the disk writeback cache and then a single full fsync to trigger
> > > + the disk cache flush at the end of the operation. This mode is expected to
> > > + be as safe as `fsync` on macOS for repos stored on HFS+ or APFS filesystems
> > > + and on Windows for repos stored on NTFS or ReFS filesystems.
> >
> > This mode will not be supported by all parts of our stack that use our
> > new fsync infra. So I think we should both document that some parts of
> > the stack don't support batching, and say what the fallback behaviour is
> > for those that don't.
> >
>
> Can do. I'm hoping that you'll revive your batch-mode refs change too so that
> we get batching across the ODB and Refs, which are the two data stores that
> may receive many updates in a single Git command.
Huh, I completely forgot that my previous implementation already had
such a mechanism. I may have a go at it again, but it would take me a
while given that I'll be OOO most of April.
> This documentation
> comment will read:
> ```
> * `batch` enables a mode that uses writeout-only flushes to stage multiple
> updates in the disk writeback cache and then does a single full fsync of
> a dummy file to trigger the disk cache flush at the end of the operation.
> Currently `batch` mode only applies to loose-object files. Other repository
> data is made durable as if `fsync` was specified. This mode is expected to
> be as safe as `fsync` on macOS for repos stored on HFS+ or APFS filesystems
> and on Windows for repos stored on NTFS or ReFS filesystems.
> ```
Reads good to me, thanks!
Patrick
Documentation/config/core.txt (outdated):

@@ -62,22 +62,54 @@ core.protectNTFS::
 Defaults to `true` on Windows, and `false` elsewhere.
On the Git mailing list, Bagas Sanjaya wrote (reply to this):
On 16/03/22 04.30, Neeraj Singh via GitGitGadget wrote:
> On a filesystem with a singular journal that is updated during name
> operations (e.g. create, link, rename, etc), such as NTFS, HFS+, or XFS
> we would expect the fsync to trigger a journal writeout so that this
> sequence is enough to ensure that the user's data is durable by the time
> the git command returns.
But what about ext4? Will fsync-ing trigger writing journal?
--
An old man doll... just what I always wanted! - Clara
On the Git mailing list, Neeraj Singh wrote (reply to this):
On Wed, Mar 16, 2022 at 4:50 AM Bagas Sanjaya <bagasdotme@gmail.com> wrote:
>
> On 16/03/22 04.30, Neeraj Singh via GitGitGadget wrote:
> > On a filesystem with a singular journal that is updated during name
> > operations (e.g. create, link, rename, etc), such as NTFS, HFS+, or XFS
> > we would expect the fsync to trigger a journal writeout so that this
> > sequence is enough to ensure that the user's data is durable by the time
> > the git command returns.
> >
>
> But what about ext4? Will fsync-ing trigger writing journal?
>
That's a good question. So I did an experiment on ext4 which gives me
some confidence.

Here's my ext4 configuration: /dev/sdc on / type ext4
(rw,relatime,discard,errors=remount-ro,data=ordered)

I added a new mode called core.fsyncMethod=batch-extra-fsync. This
issues an extra open,fsync,close during migration from the tmp-objdir
(which I confirmed is really happening using strace). The added cost of
this extra operation is relatively small compared to
core.fsyncMethod=fsync. That leads me to believe that (barring fs bugs),
ext4 thinks that the data is already sufficiently durable that it
doesn't need to issue an extra disk cache flush. See
https://github.com/neerajsi-msft/git/commit/131466dd95165efc5c480d971c69ea1e9182657e
for the test code. I don't particularly want to add this as a built-in
mode at this point since it will be somewhat hard to document which mode
a user should choose.

Thanks,
Neeraj
This branch is now known as
This patch series was integrated into seen via git@e57a02e.
This patch series was integrated into seen via git@ff16f42.
This patch series was integrated into seen via git@f5eb924.
This patch series was integrated into seen via git@27dd460.
This patch series was integrated into master via git@27dd460.
This patch series was integrated into next via git@27dd460.
Closed via 27dd460.
@dscho: Would it be possible to reopen this PR? It was closed accidentally since Junio picked the matching branch name for this PR for a different set of changes.
/submit
Error: git fetch https://github.com/gitgitgadget/git -- +refs/notes/gitgitgadget:refs/notes/gitgitgadget +refs/heads/maint:refs/remotes/upstream/maint +refs/heads/seen:refs/remotes/upstream/seen +refs/heads/master:refs/remotes/upstream/master +refs/heads/next:refs/remotes/upstream/next +refs/tags/pr-1134/neerajsi-msft/ns/batched-fsync-v5:refs/tags/pr-1134/neerajsi-msft/ns/batched-fsync-v5 +refs/pull/1134/head:refs/pull/1134/head +refs/pull/1134/merge:refs/pull/1134/merge failed: 128,
On the Git mailing list, nksingh85@gmail.com wrote (reply to this):
|
Unfortunately, there does not seem to be any way to reopen the PR, not even for me :frown: |
On the Git mailing list, Junio C Hamano wrote (reply to this):

Junio C Hamano <gitster@pobox.com> writes:
> nksingh85@gmail.com writes:
>
>> From: Neeraj Singh <neerajsi@microsoft.com>
>>
>> GGG closed this series erroneously, so I'm trying out
>> git-send-email. Apologies for any mistakes.
>>
>> This series is also available at
>> https://github.com/neerajsi-msft/git/git.git ns/batched-fsync-v6.
>>
>> V6 changes:
We haven't heard anything on this topic from anybody for this round.
I am planning to merge it to 'next' soonish.
Please speak up if anybody has concerns.
Thanks.
|
On the Git mailing list, Neeraj Singh wrote (reply to this):

On 5/19/2022 2:47 PM, Junio C Hamano wrote:
> Junio C Hamano <gitster@pobox.com> writes:
>
>> nksingh85@gmail.com writes:
>>
>>> From: Neeraj Singh <neerajsi@microsoft.com>
>>>
>>> GGG closed this series erroneously, so I'm trying out
>>> git-send-email. Apologies for any mistakes.
>>>
>>> This series is also available at
>>> https://github.com/neerajsi-msft/git/git.git ns/batched-fsync-v6.
>>>
>>> V6 changes:
>
> We haven't heard anything on this topic from anybody for this round.
> I am planning to merge it to 'next' soonish.
>
> Please speak up if anybody has concerns.
>
> Thanks.

No updates on my end. I'll keep my eyes out for any reports of regression.

Thanks,
Neeraj |
On the Git mailing list, Johannes Schindelin wrote (reply to this):

Hi Neeraj,
On Thu, 19 May 2022, Neeraj Singh wrote:
> On 5/19/2022 2:47 PM, Junio C Hamano wrote:
> > Junio C Hamano <gitster@pobox.com> writes:
> >
> > > nksingh85@gmail.com writes:
> > >
> > > > From: Neeraj Singh <neerajsi@microsoft.com>
> > > >
> > > > GGG closed this series erroneously, so I'm trying out
> > > > git-send-email. Apologies for any mistakes.
> > > >
> > > > This series is also available at
> > > > https://github.com/neerajsi-msft/git/git.git ns/batched-fsync-v6.
> > > >
> > > > V6 changes:
> >
> > We haven't heard anything on this topic from anybody for this round.
> > I am planning to merge it to 'next' soonish.
> >
> > Please speak up if anybody has concerns.
> >
> > Thanks.
> >
>
> No updates on my end. I'll keep my eyes out for any reports of regression.
I asked a colleague to have a go with these patches and the only concern I
heard back was that with ext4's new `fast_commit` feature, the `fsync`
seems not to actually flush all metadata. They indicated that they'd be
happy with merely documenting this issue, and also pointed out that the
`fast_commit` feature seems still not to be considered ready for
production workloads.
So: 👍 from my side.
Ciao,
Dscho |
V6 changes:
V5 changes:
V4 changes:
V3 changes:
V2 changes:
--Original definition--
When core.fsync includes
loose-object
, we issue an fsync after every written object. For a 'git-add' or similar command that adds a lot of files to the repo, the costs of these fsyncs adds up. One major factor in this cost is the time it takes for the physical storage controller to flush its caches to durable media.This series takes advantage of the writeout-only mode of git_fsync to issue OS cache writebacks for all of the objects being added to the repository followed by a single fsync to a dummy file, which should trigger a filesystem log flush and storage controller cache flush. This mechanism is known to be safe on common Windows filesystems and expected to be safe on macOS. Some linux filesystems, such as XFS, will probably do the right thing as well. See [1] for previous discussion on the predecessor of this patch series.
This series is important on Windows, where loose objects are included in the fsync set by default in Git for Windows. In this series, I'm also setting the default mode for Windows to turn on loose-object fsyncing with batch mode, so that we can get CI coverage of the actual Git for Windows configuration upstream. We still don't actually issue fsyncs for the test suite, since GIT_TEST_FSYNC is set to 0, but we exercise all of the surrounding batch-mode code.
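For reference, the combination described above corresponds to the following repository configuration (option names as introduced by this series and by ns/core-fsyncmethod):

```ini
[core]
	fsync = loose-object
	fsyncMethod = batch
```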
This work is based on 'next' at c54b8eb. It's dependent on ns/core-fsyncmethod.
[1] https://lore.kernel.org/git/2c1ddef6057157d85da74a7274e03eacf0374e45.1629856293.git.gitgitgadget@gmail.com/
cc: Johannes.Schindelin@gmx.de
cc: avarab@gmail.com
cc: nksingh85@gmail.com
cc: ps@pks.im
cc: jeffhost@microsoft.com
cc: Bagas Sanjaya bagasdotme@gmail.com
cc: worldhello.net@gmail.com