
Allow parallel filters feature for comm size of 1 #840

Merged

Conversation

jhendersonHDF
Collaborator

No description provided.

@jhendersonHDF jhendersonHDF force-pushed the parallel_filters_serial_writes branch from 437a611 to db3fb5c on July 15, 2021 03:49
Contributor

@qkoziol qkoziol left a comment


No objections, but there's a whole host of places where a communicator of size 1 should/could behave differently from a communicator with >1 ranks.

@jhendersonHDF
Collaborator Author

jhendersonHDF commented Jul 15, 2021

No objections, but there's a whole host of places where a communicator of size 1 should/could behave differently from a communicator with >1 ranks.

Would this be cases like initializing with MPI_Init_thread()?

EDIT: There are two other places in the library where we check the communicator size and behave differently when comm_size is 1. One is just an optimization in H5Fsuper.c that reads the superblock from only 1 rank, so I don't think it matters much there whether a size-1 communicator behaves differently from a communicator with >1 ranks. The other is in the page buffer code, where we DON'T bypass the page buffer in parallel if comm_size is 1; I think that case might have the same issue. However, that code is currently #ifdef'ed out because the check apparently stopped working at some point. The main takeaway is: if checking for a comm_size of 1 isn't good enough here, we probably shouldn't merge this, or we should add extra checks here and maybe in the page buffer code as well.
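For context, a minimal sketch (not the actual HDF5 internals; the helper name and surrounding structure are hypothetical) of the kind of check being discussed: query the size of the file's MPI communicator and take the serial/independent path only when a single rank is present.

```c
#include <mpi.h>
#include <stdbool.h>

/* Hypothetical helper: decide whether the serial (independent) I/O path
 * can be used, based on the size of the communicator the file was
 * opened with. */
static bool
use_independent_io(MPI_Comm comm)
{
    int mpi_size = 0;

    /* MPI_Comm_size reports the number of ranks in the communicator */
    if (MPI_Comm_size(comm, &mpi_size) != MPI_SUCCESS)
        return false; /* be conservative on error */

    /* With a single rank there is no collective coordination to do, so
     * the serial write path can be used -- unless other parts of the
     * library also key off comm_size > 1, which is exactly the concern
     * raised above. */
    return (mpi_size == 1);
}
```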

@lrknox lrknox merged commit 50a37fd into HDFGroup:develop Jul 20, 2021
@jhendersonHDF jhendersonHDF deleted the parallel_filters_serial_writes branch January 12, 2022 23:19
jhendersonHDF added a commit to jhendersonHDF/hdf5 that referenced this pull request Mar 25, 2022
lrknox pushed a commit that referenced this pull request Mar 25, 2022
* Use internal version of H5Eprint2 to avoid possible stack overflow (#661)

* Add support for parallel filters to h5repack (#832)

* Allow parallel filters feature for comm size of 1 (#840)

* Avoid popping API context when one wasn't pushed (#848)

* Fix several warnings (#720)

* Don't allow H5Pset(get)_all_coll_metadata_ops for DXPLs (#1201)

* Fix free list tracking and cleanup cast alignment warnings (#1288)

* Fix free list tracking and cleanup cast alignment warnings

* Add free list tracking code to H5FL 'arr' routines

* Fix usage of several HDfprintf format specifiers after HDfprintf removal (#1324)

* Use appropriate printf format specifiers for haddr_t and hsize_t types directly (#1340)

* Fix H5ACmpio dirty bytes creation debugging (#1357)

* Fix documentation for H5D_space_status_t enum values (#1372)

* Parallel rank0 deadlock fixes (#1183)

* Fix several places where rank 0 can skip past collective MPI operations on failure

* Committing clang-format changes

Co-authored-by: github-actions <41898282+github-actions[bot]@users.noreply.github.com>

* Fix a few issues noted by LGTM (#1421)

* Fix cache sanity checking code by moving functions to wider scope (#1435)

* Fix metadata cache bug when resizing a pinned/protected entry (v2) (#1463)

* Disable memory alloc sanity checks by default for Autotools debug builds (#1468)

* Committing clang-format changes

Co-authored-by: github-actions <41898282+github-actions[bot]@users.noreply.github.com>
jhendersonHDF added a commit to jhendersonHDF/hdf5 that referenced this pull request Apr 13, 2022