
Fix some issues with collective metadata reads for chunked datasets #3716

Merged
merged 5 commits on Oct 24, 2023

Conversation

jhendersonHDF (Collaborator)

Add functions/callbacks for explicit control over chunk index open/close

Add functions/callbacks to check whether a chunk index is open so that it can be opened, if necessary, before collective metadata reads are temporarily disabled in the library

Add functions/callbacks for requesting loading of additional chunk index metadata beyond the chunk index itself
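Taken together, these changes amount to an "ensure the index is open, then pre-load its metadata" pattern, performed while collective metadata reads are still enabled. The sketch below is a hypothetical, heavily simplified illustration of that flow; the structure and function names are invented stand-ins, not the real HDF5 internals (which work through `H5D_chk_idx_info_t` and per-index callback tables).

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical, simplified stand-in for a chunk index; the real HDF5
 * structures are far more involved. */
typedef struct {
    bool is_open;    /* e.g. whether the index's in-memory handle is non-NULL */
    int  open_count; /* counts real opens, for illustration */
} chunk_index;

/* is_open callback: report state without side effects */
static int idx_is_open(const chunk_index *idx, bool *is_open_out)
{
    *is_open_out = idx->is_open;
    return 0; /* herr_t-style success */
}

/* open callback */
static int idx_open(chunk_index *idx)
{
    idx->is_open = true;
    idx->open_count++;
    return 0;
}

/* Make sure the index is open *while collective metadata reads are
 * still enabled*, so no rank later opens it independently. */
static int ensure_index_open(chunk_index *idx)
{
    bool open_now = false;

    if (idx_is_open(idx, &open_now) < 0)
        return -1;
    if (!open_now && idx_open(idx) < 0)
        return -1;
    /* ...now collective metadata reads can safely be disabled... */
    return 0;
}
```

The key property is that the check is side-effect free, so calling it is cheap, and the open only happens once, during the collective phase.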

@jhendersonHDF added the labels Merge - To 1.14, Priority - 1. High 🔼, Component - C Library, Component - Parallel, and Type - Bug / Bugfix on Oct 19, 2023
@jhendersonHDF (Collaborator, Author)

This PR is meant to address the issues found and reported in section 3.2 of
HDF5_Collective_IO_Performance_Investigation.pdf, and also reported in #2658. It's unclear whether the independent reads in #2658 come specifically from the extensible array structure, header and/or index block, but these changes fix collective reading of all of those, so they should at least help there.

Add functions/callbacks for explicit control over chunk index open/close

Add functions/callbacks to check if chunk index is open or not so
that it can be opened if necessary before temporarily disabling
collective metadata reads in the library

Add functions/callbacks for requesting loading of additional chunk
index metadata beyond the chunk index itself
*-------------------------------------------------------------------------
*/
static herr_t
H5D__btree_idx_load_metadata(const H5D_chk_idx_info_t H5_ATTR_UNUSED *idx_info)
Member

Should this load the root node?

Collaborator Author

For v1 btrees, it looks like the search starts from metadata that will already be in the cache as a consequence of opening the index.

Member

So the root node is loaded collectively?

Collaborator Author

Yes; at least for v1 btrees, the metadata needed should already be loaded collectively after the "index open" call.

Member

H5D__btree_idx_open() is a no-op. Does it happen in some other function?

Collaborator Author

Ah yes, sorry; on second look it doesn't occur as part of the index open call. It seems that the v1 B-tree root node exists directly at the file address for the chunk index, so no real open occurs. This may need to be revisited, but v1 btrees seem to be less straightforward than the other index types.

Member

I think you could implement it the same way as the other indices with an address query of chunk 0 (though see my notes on the b-tree 2 implementation)

*-------------------------------------------------------------------------
*/
static herr_t
H5D__bt2_idx_load_metadata(const H5D_chk_idx_info_t H5_ATTR_UNUSED *idx_info)
Member

Same comment about loading the root node

Collaborator Author

Hmm, yes, it looks like I should probably add a lookup here for the root node.

Member

It might take too much time to get in for 1.14.3, since I think you'd need to add an H5B2 function, but it's something to keep in mind for later.

Collaborator Author

It looks like performing a bt2 get_addr call will have the same effect as with earray and farray: a fake chunk lookup will make sure the root node is read in collectively. We should eventually update the interfaces behind the chunk indices to give more explicit control over this, though.
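The "fake chunk lookup" idea discussed here can be sketched as follows. This is an illustrative stand-in only: the structure and function names are invented for the sketch, not the real H5B2 API, but they show why an address query of chunk 0 primes the root node as a side effect.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for a v2 B-tree chunk index */
typedef struct {
    int      root_loaded; /* set once any lookup walks through the root */
    uint64_t chunk0_addr;
} bt2_index;

/* get_addr-style lookup: every search starts at the root, so the root
 * node is read in (collectively, in the real library) as a side effect */
static int bt2_get_addr(bt2_index *idx, uint64_t chunk, uint64_t *addr)
{
    idx->root_loaded = 1;
    *addr = (chunk == 0) ? idx->chunk0_addr : UINT64_MAX;
    return 0;
}

/* load_metadata callback: a "fake" lookup of chunk 0, performed only
 * for its side effect of pulling the root node into the cache */
static int bt2_load_metadata(bt2_index *idx)
{
    uint64_t ignored;
    return bt2_get_addr(idx, 0, &ignored);
}
```

The same side-effect trick is what the earray and farray callbacks rely on: the lookup result itself is discarded.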

Member

This will also open some other internal nodes and the leaf node for chunk 0, which is honestly fine.

Member

Though since there are multiple steps (each node must be read to get the address of the node further down), it could be wasted overhead if no process actually needs any data on that "side" of the dataset. But again, something to keep in mind for later.

Member

Taking this further: after collectively loading only the root node, we could collectively load every node at the next level down in a single operation, then the next level, and so on (up to a limit that could perhaps depend on the size of the metadata cache and the average node size).

Member

To prime the metadata cache in the most efficient way possible
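The level-by-level priming proposed above can be sketched as a breadth-first walk with a depth cap. This is a toy model, not HDF5 code: the fan-out, the width limit, and the `loaded` flag (standing in for "present in the metadata cache") are all invented for illustration, and "loading" a level stands in for one collective read of all nodes at that level.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_CHILDREN 4   /* small fan-out, for illustration only */
#define MAX_WIDTH    64  /* widest level this sketch can handle */

typedef struct node {
    struct node *children[MAX_CHILDREN];
    int          loaded; /* stand-in for "present in the metadata cache" */
} node;

/* "Collectively load" every node at one level; gather the next level */
static size_t load_level(node **level, size_t n, node **next)
{
    size_t out = 0;
    for (size_t i = 0; i < n; i++) {
        level[i]->loaded = 1; /* one collective read per level, in the idea */
        for (int c = 0; c < MAX_CHILDREN; c++)
            if (level[i]->children[c] && out < MAX_WIDTH)
                next[out++] = level[i]->children[c];
    }
    return out;
}

/* Descend at most max_levels levels; returns total nodes "loaded" */
static int preload_levels(node *root, int max_levels)
{
    node  *cur[MAX_WIDTH] = { root };
    node  *next[MAX_WIDTH];
    size_t cur_n = 1;
    int    total = 0;

    for (int lvl = 0; lvl < max_levels && cur_n > 0; lvl++) {
        size_t next_n = load_level(cur, cur_n, next);
        total += (int)cur_n;
        for (size_t i = 0; i < next_n; i++)
            cur[i] = next[i];
        cur_n = next_n;
    }
    return total;
}
```

In the real library the depth cap would presumably be derived from the metadata cache size and average node size, as suggested above, so priming never evicts more than it loads.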

Collaborator Author

Agreed that without explicit control over loading the root node, there may be some wasted effort in loading the data for that chunk if no one needs it. I suppose the hope is that if the user has collective metadata reads enabled, the benefit of having the root node loaded at scale will be worth it. That, and the ranks that are interested in the chunk won't end up reading it independently.

src/H5Dbtree2.c Outdated
@@ -854,7 +920,10 @@ H5D__bt2_idx_insert(const H5D_chk_idx_info_t *idx_info, H5D_chunk_ud_t *udata,
assert(H5_addr_defined(udata->chunk_block.offset));

/* Check if the v2 B-tree is open yet */
-    if (NULL == idx_info->storage->u.btree2.bt2) {
+    if (H5D__bt2_idx_is_open(idx_info, &index_open) < 0)
Member

I'm a little nervous about potential function call overhead here (and other places that call this function within this file). It might get inlined, but maybe we should create a macro here to be safe?

Collaborator Author

I'd be very surprised if there was any overhead at all, but I could create a macro that the is_open function calls as well.
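The macro compromise mentioned here could look something like the sketch below. The macro name, the structure, and the field layout are invented for illustration (the real union member lives inside the index-info structures in src/H5Dearray.c); the point is that hot paths get a bare pointer test while the callback wraps the same single definition.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical slice of the extensible-array storage state */
typedef struct {
    void *ea; /* non-NULL once the extensible array is open */
} earray_storage;

/* The macro is just the pointer test, so hot paths pay no call overhead.
 * H5D_EARRAY_IDX_IS_OPEN is an invented name for this sketch. */
#define H5D_EARRAY_IDX_IS_OPEN(storage) ((storage)->ea != NULL)

/* The is_open callback wraps the same macro, keeping one definition */
static int earray_idx_is_open(const earray_storage *storage, bool *is_open)
{
    *is_open = H5D_EARRAY_IDX_IS_OPEN(storage);
    return 0;
}
```

Whether the macro is worth it is debatable, since any modern compiler will almost certainly inline such a trivial static function anyway; the macro just removes the doubt.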

src/H5Dearray.c Outdated
@@ -953,7 +1016,10 @@ H5D__earray_idx_insert(const H5D_chk_idx_info_t *idx_info, H5D_chunk_ud_t *udata
assert(udata);

/* Check if the extensible array is open yet */
-    if (NULL == idx_info->storage->u.earray.ea) {
+    if (H5D__earray_idx_is_open(idx_info, &index_open) < 0)
Member

Same comment about function call overhead

src/H5Dfarray.c Outdated
@@ -901,7 +963,10 @@ H5D__farray_idx_insert(const H5D_chk_idx_info_t *idx_info, H5D_chunk_ud_t *udata
assert(udata);

/* Check if the fixed array is open yet */
-    if (NULL == idx_info->storage->u.farray.fa) {
+    if (H5D__farray_idx_is_open(idx_info, &index_open) < 0)
Member

Same comment about function call overhead

Fix comment in fixed array load_metadata callback
@fortnern (Member)

Other than the is_open macros, this looks good to me. We should keep in mind the idea of collective pre-loading of the root node as a potential performance enhancement.

@derobins derobins merged commit 8cff88c into HDFGroup:develop Oct 24, 2023
40 checks passed
jhendersonHDF added a commit to jhendersonHDF/hdf5 that referenced this pull request Oct 24, 2023
…DFGroup#3716)

Add functions/callbacks for explicit control over chunk index open/close

Add functions/callbacks to check if chunk index is open or not so
that it can be opened if necessary before temporarily disabling
collective metadata reads in the library

Add functions/callbacks for requesting loading of additional chunk
index metadata beyond the chunk index itself
lrknox pushed a commit that referenced this pull request Oct 25, 2023
* Add missing test files to distclean target (#3734)

Cleans up new files in Autotools `make distclean` in the test directory

* Add tools/libtest to Autotools builds (#3735)

This was only added to CMake many years ago and tests the tools
library.

* Clean up onion VFD files in tools `make clean` (#3739)

Cleans up h5dump and h5diff *.onion files in the Autotools when
running `make clean`.

* Clean Java test files on Autotools (#3740)

Removes generated HDF5 and text output files when running `make clean`.

* Clean the flushrefresh test dir on Autotools (#3741)

The flushrefresh_test directory was not being cleaned up w/
`make clean` under the Autotools

* Fix file names in tfile.c (#3743)

Some tests in tfile.c use h5_fileaccess to get a VFD-dependent file
name but use the scheme from testhdf5, reusing the FILE1 and FILE8
names. This leads to files like test1.h5.h5 which are unintended
and not cleaned up.

This changes the filename scheme for a few tests to work with h5test,
resulting in more informative names and allowing the files to
be cleaned up at the end of the test. The test files have also
been added to the `make clean` target for the Autotools.

* Clean Autotools files in parallel tests (#3744)

Adds missing files to `make clean` for parallel, including Fortran.

* Add native VOL checks to deprecated functions (#3647)

* Add native VOL checks to deprecated functions

* Remove unneeded native VOL checks

* Move native checks to top level calls

* Fix buffer overflow in cache debugging code (#3691)

* update stat arg for apple (#3726)

* update stat arg for apple

* use H5_HAVE_DARWIN for Apple ifdef

* fix typo

* removed duplicate H5_ih_info_t

* added fortran async test to cmake

* Fix windows cpack error in WiX package. (#3747)

* Add a simple cache to the ros3 VFD (#3753)

Adds a small cache of the first N bytes of a file opened with the
read-only S3 (ros3) VFD, where N is 4kiB or the size of the file,
whichever is smaller. This avoids a lot of small I/O operations
on file open.

Addresses GitHub issue #3381

* Update Autotools to correctly configure oneAPI (#3751)

* Update Autotools to correctly configure oneAPI

Splits the Intel config files under the Autotools into 'classic'
Intel and oneAPI versions, fixing 'unsupported option' messages.

Also turns off `-check uninit` (new in 2023) in Fortran, which kills
the H5_buildiface program due to false positives.

* Enable Fortran in oneAPI CI workflow

* Turn on Fortran in CMake, update LD_LIBRARY_PATH

* Go back to disabling Fortran w/ Intel

For some reason there's a linking problem w/ Fortran

error while loading shared libraries: libifport.so.5: cannot open shared object file: No such file or directory

* Add h5pget_actual_selection_io_mode fortran wrapper (#3746)

* added h5pget_actual_selection_io_mode_f test

* added tests for h5pget_actual_selection_io_mode_f

* fixed int_f type conversion

* Update fortran action step (#3748)

* Added missing DLL for H5PGET_ACTUAL_SELECTION_IO_MODE_F (#3760)

* add missing H5PGET_ACTUAL_SELECTION_IO_MODE_F dll

* Bump the ros3 VFD cache to 16 MiB (#3759)

* Fix hangs during collective I/O with independent metadata writes (#3693)

* Fix some issues with collective metadata reads for chunked datasets (#3716)

Add functions/callbacks for explicit control over chunk index open/close

Add functions/callbacks to check if chunk index is open or not so
that it can be opened if necessary before temporarily disabling
collective metadata reads in the library

Add functions/callbacks for requesting loading of additional chunk
index metadata beyond the chunk index itself

* Fix failure in t_select_io_dset when run with more than 10 ranks (#3758)

* Fix H5Pset_evict_on_close failing regardless of actual parallel use (#3761)

Allow H5Pset_evict_on_close to be called regardless of whether a parallel build of HDF5 is being used

Fail during file opens if H5Pset_evict_on_close has been set to true on the given File Access Property List and the size of the MPI communicator being used is greater than 1
@jhendersonHDF jhendersonHDF deleted the coll_md_refactor branch October 26, 2023 01:15