coll/libnbc: do not handle MPI_IN_PLACE in neighborhood collectives #1004
Conversation
Force-pushed from b9e738c to f277bea
Test PASSed.
@ggouaillardet can you please request a reviewer?
:bot:assign: @bosilca see my comments at http://www.open-mpi.org/community/lists/users/2016/03/28656.php
@ggouaillardet I don't think this patch is correct. The MPI_INEIGHBOR_ALLTOALLW function (defined in MPI 3.1, page 328) identifies rdispls as the displacement in bytes, relative to recvbuf, at which to place the incoming data from each neighbor.
Thus, your patch, which computes the receive buffer for each MPI_Recv from the recvtypes and recvcounts of the particular peer, does not respect the MPI standard. I think the correct receive buffer for each planned receive should be recvbuf offset by the corresponding byte displacement rdispls[i].
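A minimal sketch of what that means in code; the helper and the sources[] array are assumed for illustration and are not the libnbc internals:

```c
#include <mpi.h>

/* Sketch only: post one receive per incoming neighbor the way MPI 3.1
 * specifies for MPI_(I)NEIGHBOR_ALLTOALLW. rdispls[j] is a displacement
 * in bytes relative to recvbuf, so the per-peer buffer is computed from
 * the user-supplied displacement, not from the recvcounts/recvtypes of
 * the earlier peers. */
static void post_neighbor_recvs(void *recvbuf, const int recvcounts[],
                                const MPI_Aint rdispls[],
                                const MPI_Datatype recvtypes[],
                                int indegree, const int sources[],
                                MPI_Comm comm, MPI_Request reqs[])
{
    for (int j = 0; j < indegree; j++) {
        char *rbuf = (char *) recvbuf + rdispls[j];
        MPI_Irecv(rbuf, recvcounts[j], recvtypes[j],
                  sources[j], 0 /* tag */, comm, &reqs[j]);
    }
}
```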
@bosilca as I wrote on the ML, I had several questions.
Bottom line: if the user's test case is valid per the MPI standard, then I think this commit helps.
@ggouaillardet I am sorry I missed both the fact that you were in the INPLACE branch and your email on the mailing list. Let me try to quickly remedy this.
Thus, the in-place code should be entirely stripped out. I guess @hjelmn reached the same conclusion when he removed the in-place support from master for all the neighborhood collectives (d42e096) (and here I am not talking about MPI_IN_PLACE, which has no meaning for neighborhood collectives).
@bosilca I quickly checked the standard and reached the same conclusion.
Force-pushed from f277bea to f3ddb03
MPI_IN_PLACE is not a valid send buffer for neighborhood collectives, so just ignore it here. This commit is a small subset of open-mpi/ompi@d42e096. Thanks Jun Kudo for the report.
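For context, a hedged sketch of the kind of in-place special-casing this commit stops applying on the neighborhood paths; the helper name and exact shape are illustrative, not the literal libnbc diff:

```c
#include <mpi.h>

/* Illustrative only: a check of this flavor makes sense for regular
 * collectives, but must not run for neighborhood collectives, where
 * MPI_IN_PLACE is not a valid send buffer. After this commit, the
 * neighborhood paths use sendbuf exactly as the caller passed it. */
static inline int detect_inplace(const void *sendbuf, const void *recvbuf)
{
    return (sendbuf == recvbuf) || (MPI_IN_PLACE == sendbuf);
}
```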
Force-pushed from f3ddb03 to bbfa865
Test PASSed.
…PI_Ineighbor_all* (cherry picked from commit open-mpi/ompi@3b0b929)
Force-pushed from c952464 to ea2fcb9
Test PASSed.
@bosilca Can you have a look at this?
The patch removes all support for MPI_IN_PLACE from the neighborhood collectives, which puts us in compliance with the MPI standard. 👍
@hppritcha Good to go.
well, I approve it too 😄
This is a one-off quick fix, and it is unlikely to work with complex types
(e.g. a negative lower bound).
libnbc considers an MPI_Ineighbor_alltoallw call to be in place when the send and receive buffers are identical,
even if the two regions do not actually overlap because of distinct displacements.
Thanks Jun Kudo for the bug report.
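To make the failure mode concrete, here is a minimal sketch in the spirit of the report (assumed, not Jun Kudo's actual reproducer): sendbuf and recvbuf share the same base pointer, so a pointer-equality check flags the call as in place, even though the send and receive regions are disjoint thanks to the displacements:

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    /* 1-D periodic cartesian topology: every rank has 2 neighbors. */
    int size, dims[1], periods[1] = {1};
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    dims[0] = size;
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &cart);

    /* One allocation: slots 0-1 are the send region, slots 2-3 the
     * receive region. The base pointers are identical, but the byte
     * displacements keep the two regions disjoint. */
    int *buf = malloc(4 * sizeof(int));
    int counts[2] = {1, 1};
    MPI_Datatype types[2] = {MPI_INT, MPI_INT};
    MPI_Aint sdispls[2] = {0, sizeof(int)};
    MPI_Aint rdispls[2] = {2 * sizeof(int), 3 * sizeof(int)};
    MPI_Request req;

    buf[0] = 1; buf[1] = 2;
    /* sendbuf == recvbuf == buf triggers the false in-place detection. */
    MPI_Ineighbor_alltoallw(buf, counts, sdispls, types,
                            buf, counts, rdispls, types, cart, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    free(buf);
    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}
```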