MPI (Message Passing Interface) is the standard for parallel, scientific programming. For whatever reason, the MPI committee decided to drop the C++ bindings a while ago. The C bindings still exist, but, being C, they do not fit well into modern C++ programs. mpiwrap seeks to solve that problem by providing a convenient C++ wrapper around the C bindings. So if you are an avid C++ programmer who is disgusted by the horrible C interface of MPI, this is the library for you!
mpiwrap provides overloads for seamless usage with all standard C++ types, plus `std::vector` of these types, and it is easy to provide user overloads for custom types. Where useful, mpiwrap inserts additional checks in the form of asserts to prevent size errors when using vectors. Furthermore, mpiwrap tries to generalize all MPI functions, allowing the user to cover a wider range of use cases hassle-free.
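As a small sketch of what this looks like in practice (based on the `send`/`recv` usage patterns listed in the table below; this assumes an installed MPI and the mpiwrap library, so treat it as illustrative rather than a verified program):

```cpp
#include <vector>

#include <mpiwrap/mpi.h>

int main(int argc, char **argv)
{
    mpi::mpi init{argc, argv};
    auto rank = mpi::comm("world")->rank();

    // The same overload set handles scalars and std::vector:
    // mpiwrap deduces the MPI datatype and element count for you.
    std::vector<double> data(100, 1.0);
    if (rank == 0)
        mpi::comm("world")->dest(1)->send(data);
    else if (rank == 1)
    {
        std::vector<double> bucket;
        // The receiving vector is sized to fit the incoming message;
        // the assert-based size checks mentioned above guard this step.
        mpi::comm("world")->source(0)->recv(bucket);
    }
    return 0;
}
```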
The best way to install mpiwrap is to clone the repository and add the `add_subdirectory` and `target_link_libraries` commands to your `CMakeLists.txt`. This automatically adds MPI to your project (you still need to install MPI separately, though). A simple project file might look like this:
cmake_minimum_required(VERSION 3.1)
#Define the project.
project(hello_mpi)
add_executable(hello_mpi main.cpp)
#Add the mpiwrap repository.
add_subdirectory(mpiwrap)
#Add the library to the project.
target_link_libraries(hello_mpi PRIVATE mpiwrap)
You can then try MPI with a little sample program:
#include <iostream>

#include <mpiwrap/mpi.h>

int main(int argc, char **argv)
{
    mpi::mpi init{argc, argv};
    auto world_size = mpi::comm("world")->size();
    auto world_rank = mpi::comm("world")->rank();
    auto processor_name = mpi::processor_name();
    std::cout << "Hello world from processor " << processor_name << ", rank " << world_rank << " out of " << world_size << " processors.\n";
    return 0;
}
This is a list of the currently implemented MPI functions and their usage with the mpiwrap wrapper. Values in brackets mean that you have to substitute reasonable values there. For example: `[COMM]` is an `MPI_Comm` value, `[VALUE]` and `[BUCKET]` can be either single variables or `std::vector`s, and `[OP]` is either an mpi operation, a lambda, a functor, or a wrapped function. `[CHUNKSIZE]` and `[RANK]` are both positive integer values.
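As a concrete example of this substitution, the `allreduce` row below could be instantiated like this (a sketch: `"world"` stands in for `[COMM]`, two vectors for `[VALUE]` and `[BUCKET]`, and a lambda for `[OP]`):

```cpp
#include <vector>

#include <mpiwrap/mpi.h>

int main(int argc, char **argv)
{
    mpi::mpi init{argc, argv};

    // [COMM]   -> "world"
    // [VALUE]  -> the local contribution of this rank
    // [BUCKET] -> receives the elementwise result on every rank
    // [OP]     -> a lambda serving as the binary reduction operation
    std::vector<int> value(8, 1);
    std::vector<int> bucket;
    mpi::comm("world")->allreduce(value, bucket,
                                  [](int a, int b) { return a + b; });
    return 0;
}
```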
MPI Function | Implemented | Version | Usage with the mpiwrap wrapper |
---|---|---|---|
MPI_Abort | ❌ | ||
MPI_Accumulate | ❌ | ||
MPI_Add_error_class | ❌ | ||
MPI_Add_error_code | ❌ | ||
MPI_Add_error_string | ❌ | ||
MPI_Address | ❌ | ||
MPI_Aint_add | ❌ | ||
MPI_Aint_diff | ❌ | ||
MPI_Allgather | ✔️ | mpi::comm([COMM])->allgather([VALUE], [BUCKET]) |
|
MPI_Allgatherv | ❌ | ||
MPI_Alloc_mem | ❌ | ||
MPI_Allreduce | ✔️ | mpi::comm([COMM])->allreduce([VALUE], [BUCKET], [OP]) |
|
MPI_Alltoall | ✔️ | mpi::comm([COMM])->alltoall([VALUE], [BUCKET], [CHUNKSIZE]) |
|
MPI_Alltoallv | ❌ | ||
MPI_Alltoallw | ❌ | ||
MPI_Attr_delete | ❌ | ||
MPI_Attr_get | ❌ | ||
MPI_Attr_put | ❌ | ||
MPI_Barrier | ✔️ | mpi::comm([COMM])->barrier() |
|
MPI_Bcast | ✔️ | mpi::comm([COMM])->source([RANK])->bcast([VALUE]) |
|
MPI_Bsend | 🚫 | Will not be implemented because raw memory management is required. | |
MPI_Bsend_init | 🚫 | Will not be implemented because raw memory management is required. | |
MPI_Buffer_attach | 🚫 | Will not be implemented because raw memory management is required. | |
MPI_Buffer_detach | 🚫 | Will not be implemented because raw memory management is required. | |
MPI_Cancel | ✔️ | .cancel() on the mpi::request object. |
|
MPI_Cart_coords | ❌ | ||
MPI_Cart_create | ❌ | ||
MPI_Cart_get | ❌ | ||
MPI_Cart_map | ❌ | ||
MPI_Cart_rank | ❌ | ||
MPI_Cart_shift | ❌ | ||
MPI_Cart_sub | ❌ | ||
MPI_Cartdim_get | ❌ | ||
MPI_Close_port | ❌ | ||
MPI_Comm_accept | ❌ | ||
MPI_Comm_call_errhandler | ❌ | ||
MPI_Comm_compare | ✔️ | mpi::compare([COMM],[COMM]) |
|
MPI_Comm_connect | ❌ | ||
MPI_Comm_create | ❌ | ||
MPI_Comm_create_errhandler | ❌ | ||
MPI_Comm_create_group | ❌ | ||
MPI_Comm_create_keyval | ❌ | ||
MPI_Comm_delete_attr | ❌ | ||
MPI_Comm_disconnect | ❌ | ||
MPI_Comm_dup | ❌ | ||
MPI_Comm_dup_with_info | ❌ | ||
MPI_Comm_free | ❌ | ||
MPI_Comm_free_keyval | ❌ | ||
MPI_Comm_get_attr | ❌ | ||
MPI_Comm_get_errhandler | ❌ | ||
MPI_Comm_get_info | ❌ | ||
MPI_Comm_get_name | ✔️ | mpi::comm([COMM])->name() |
|
MPI_Comm_get_parent | ❌ | ||
MPI_Comm_group | ❌ | ||
MPI_Comm_idup | ❌ | ||
MPI_Comm_join | ❌ | ||
MPI_Comm_rank | ✔️ | mpi::comm([COMM])->rank() |
|
MPI_Comm_remote_group | ❌ | ||
MPI_Comm_remote_size | ❌ | ||
MPI_Comm_set_attr | ❌ | ||
MPI_Comm_set_errhandler | ❌ | ||
MPI_Comm_set_info | ❌ | ||
MPI_Comm_set_name | ❌ | ||
MPI_Comm_size | ✔️ | mpi::comm([COMM])->size() |
|
MPI_Comm_spawn | ❌ | ||
MPI_Comm_spawn_multiple | ❌ | ||
MPI_Comm_split | ❌ | ||
MPI_Comm_split_type | ❌ | ||
MPI_Comm_test_inter | ❌ | ||
MPI_Compare_and_swap | ❌ | ||
MPI_Dims_create | ❌ | ||
MPI_Dist_graph_create | ❌ | ||
MPI_Dist_graph_create_adjacent | ❌ | ||
MPI_Dist_graph_neighbors | ❌ | ||
MPI_Dist_graph_neighbors_count | ❌ | ||
MPI_Errhandler_create | ❌ | ||
MPI_Errhandler_free | ❌ | ||
MPI_Errhandler_get | ❌ | ||
MPI_Errhandler_set | ❌ | ||
MPI_Error_class | ❌ | ||
MPI_Error_string | ❌ | ||
MPI_Exscan | ❌ | ||
MPI_Fetch_and_op | ❌ | ||
MPI_File_c2f | ❌ | ||
MPI_File_call_errhandler | ❌ | ||
MPI_File_close | ❌ | ||
MPI_File_create_errhandler | ❌ | ||
MPI_File_delete | ❌ | ||
MPI_File_f2c | ❌ | ||
MPI_File_get_amode | ❌ | ||
MPI_File_get_atomicity | ❌ | ||
MPI_File_get_byte_offset | ❌ | ||
MPI_File_get_errhandler | ❌ | ||
MPI_File_get_group | ❌ | ||
MPI_File_get_info | ❌ | ||
MPI_File_get_position | ❌ | ||
MPI_File_get_position_shared | ❌ | ||
MPI_File_get_size | ❌ | ||
MPI_File_get_type_extent | ❌ | ||
MPI_File_get_view | ❌ | ||
MPI_File_iread | ❌ | ||
MPI_File_iread_all | ❌ | ||
MPI_File_iread_at | ❌ | ||
MPI_File_iread_at_all | ❌ | ||
MPI_File_iread_shared | ❌ | ||
MPI_File_iwrite | ❌ | ||
MPI_File_iwrite_all | ❌ | ||
MPI_File_iwrite_at | ❌ | ||
MPI_File_iwrite_at_all | ❌ | ||
MPI_File_iwrite_shared | ❌ | ||
MPI_File_open | ❌ | ||
MPI_File_preallocate | ❌ | ||
MPI_File_read | ❌ | ||
MPI_File_read_all | ❌ | ||
MPI_File_read_all_begin | ❌ | ||
MPI_File_read_all_end | ❌ | ||
MPI_File_read_at | ❌ | ||
MPI_File_read_at_all | ❌ | ||
MPI_File_read_at_all_begin | ❌ | ||
MPI_File_read_at_all_end | ❌ | ||
MPI_File_read_ordered | ❌ | ||
MPI_File_read_ordered_begin | ❌ | ||
MPI_File_read_ordered_end | ❌ | ||
MPI_File_read_shared | ❌ | ||
MPI_File_seek | ❌ | ||
MPI_File_seek_shared | ❌ | ||
MPI_File_set_atomicity | ❌ | ||
MPI_File_set_errhandler | ❌ | ||
MPI_File_set_info | ❌ | ||
MPI_File_set_size | ❌ | ||
MPI_File_set_view | ❌ | ||
MPI_File_sync | ❌ | ||
MPI_File_write | ❌ | ||
MPI_File_write_all | ❌ | ||
MPI_File_write_all_begin | ❌ | ||
MPI_File_write_all_end | ❌ | ||
MPI_File_write_at | ❌ | ||
MPI_File_write_at_all | ❌ | ||
MPI_File_write_at_all_begin | ❌ | ||
MPI_File_write_at_all_end | ❌ | ||
MPI_File_write_ordered | ❌ | ||
MPI_File_write_ordered_begin | ❌ | ||
MPI_File_write_ordered_end | ❌ | ||
MPI_File_write_shared | ❌ | ||
MPI_Finalize | ✔️ | MPI_Finalize is automatically called when mpi::mpi is destroyed. |
|
MPI_Finalized | ✔️ | mpi::finalized() |
|
MPI_Free_mem | ❌ | ||
MPI_Gather | ✔️ | mpi::comm([COMM])->dest([RANK])->gather([VALUE], [BUCKET]) |
|
MPI_Gatherv | ❌ | ||
MPI_Get | ❌ | ||
MPI_Get_accumulate | ❌ | ||
MPI_Get_address | ❌ | ||
MPI_Get_count | ❌ | ||
MPI_Get_elements | ❌ | ||
MPI_Get_elements_x | ❌ | ||
MPI_Get_library_version | ❌ | ||
MPI_Get_processor_name | ✔️ | mpi::processor_name() |
|
MPI_Get_version | ✔️ | mpi::version() with members: .version() and .subversion() |
|
MPI_Graph_create | ❌ | ||
MPI_Graph_get | ❌ | ||
MPI_Graph_map | ❌ | ||
MPI_Graph_neighbors | ❌ | ||
MPI_Graph_neighbors_count | ❌ | ||
MPI_Graphdims_get | ❌ | ||
MPI_Grequest_complete | ❌ | ||
MPI_Grequest_start | ❌ | ||
MPI_Group_compare | ❌ | ||
MPI_Group_difference | ❌ | ||
MPI_Group_excl | ❌ | ||
MPI_Group_free | ❌ | ||
MPI_Group_incl | ❌ | ||
MPI_Group_intersection | ❌ | ||
MPI_Group_range_excl | ❌ | ||
MPI_Group_range_incl | ❌ | ||
MPI_Group_rank | ❌ | ||
MPI_Group_size | ❌ | ||
MPI_Group_translate_ranks | ❌ | ||
MPI_Group_union | ❌ | ||
MPI_Iallgather | ✔️ | mpi::comm([COMM])->iallgather([VALUE], [BUCKET]) |
|
MPI_Iallgatherv | ❌ | ||
MPI_Iallreduce | ✔️ | mpi::comm([COMM])->iallreduce([VALUE], [BUCKET], [OP]) |
|
MPI_Ialltoall | ✔️ | mpi::comm([COMM])->ialltoall([VALUE], [BUCKET], [CHUNKSIZE]) |
|
MPI_Ialltoallv | ❌ | ||
MPI_Ialltoallw | ❌ | ||
MPI_Ibarrier | ✔️ | mpi::comm([COMM])->ibarrier() |
|
MPI_Ibcast | ✔️ | mpi::comm([COMM])->source([RANK])->ibcast([VALUE]) |
|
MPI_Ibsend | 🚫 | Will not be implemented because raw memory management is required. | |
MPI_Iexscan | ❌ | ||
MPI_Igather | ✔️ | mpi::comm([COMM])->dest([RANK])->igather([VALUE], [BUCKET]) |
|
MPI_Igatherv | ❌ | ||
MPI_Improbe | ❌ | ||
MPI_Imrecv | ❌ | ||
MPI_Ineighbor_allgather | ❌ | ||
MPI_Ineighbor_allgatherv | ❌ | ||
MPI_Ineighbor_alltoall | ❌ | ||
MPI_Ineighbor_alltoallv | ❌ | ||
MPI_Ineighbor_alltoallw | ❌ | ||
MPI_Info_create | ❌ | ||
MPI_Info_delete | ❌ | ||
MPI_Info_dup | ❌ | ||
MPI_Info_free | ❌ | ||
MPI_Info_get | ❌ | ||
MPI_Info_get_nkeys | ❌ | ||
MPI_Info_get_nthkey | ❌ | ||
MPI_Info_get_valuelen | ❌ | ||
MPI_Info_set | ❌ | ||
MPI_Init | ✔️ | mpi::mpi init(argc, argv) |
|
MPI_Init_thread | ❌ | ||
MPI_Initialized | ✔️ | mpi::initialized() |
|
MPI_Intercomm_create | ❌ | ||
MPI_Intercomm_merge | ❌ | ||
MPI_Iprobe | ❌ | ||
MPI_Irecv | ✔️ | mpi::comm([COMM])->source([RANK])->irecv([BUCKET]) |
|
MPI_Ireduce | ✔️ | mpi::comm([COMM])->dest([RANK])->ireduce([VALUE], [BUCKET], [OP]) |
|
MPI_Ireduce_scatter | ❌ | ||
MPI_Ireduce_scatter_block | ❌ | ||
MPI_Irsend | ✔️ | mpi::comm([COMM])->dest([RANK])->irsend([VALUE]) |
|
MPI_Is_thread_main | ❌ | ||
MPI_Iscan | ❌ | ||
MPI_Iscatter | ✔️ | mpi::comm([COMM])->source([RANK])->iscatter([VALUE], [CHUNKSIZE]) |
|
MPI_Iscatterv | ❌ | ||
MPI_Isend | ✔️ | mpi::comm([COMM])->dest([RANK])->isend([VALUE]) |
|
MPI_Issend | ✔️ | mpi::comm([COMM])->dest([RANK])->issend([VALUE]) |
|
MPI_Keyval_create | ❌ | ||
MPI_Keyval_free | ❌ | ||
MPI_Lookup_name | ❌ | ||
MPI_Mprobe | ❌ | ||
MPI_Mrecv | ❌ | ||
MPI_Neighbor_allgather | ❌ | ||
MPI_Neighbor_allgatherv | ❌ | ||
MPI_Neighbor_alltoall | ❌ | ||
MPI_Neighbor_alltoallv | ❌ | ||
MPI_Neighbor_alltoallw | ❌ | ||
MPI_Op_commute | ✔️ | .commutes() on the mpi::op object. |
|
MPI_Op_create | ✔️ | mpi::make_op<T>([LAMBDA]) or mpi::make_op<T>(mpi::wrap<T,[FUNC]>) ¹ |
|
MPI_Op_free | ✔️ | Automatically called when the mpi::op object created by mpi::make_op is destroyed. |
|
MPI_Open_port | ❌ | ||
MPI_Pack | ❌ | ||
MPI_Pack_external | ❌ | ||
MPI_Pack_external_size | ❌ | ||
MPI_Pack_size | ❌ | ||
MPI_Pcontrol | ❌ | ||
MPI_Probe | ❌ | ||
MPI_Publish_name | ❌ | ||
MPI_Put | ❌ | ||
MPI_Query_thread | ❌ | ||
MPI_Raccumulate | ❌ | ||
MPI_Recv | ✔️ | mpi::comm([COMM])->source([RANK])->recv([BUCKET]) |
|
MPI_Recv_init | ❌ | ||
MPI_Reduce | ✔️ | mpi::comm([COMM])->dest([RANK])->reduce([VALUE], [BUCKET], [OP]) |
|
MPI_Reduce_local | ✔️ | mpi::reduce([VALUE], [BUCKET], [OP]) |
|
MPI_Reduce_scatter | ❌ | ||
MPI_Reduce_scatter_block | ❌ | ||
MPI_Register_datarep | ❌ | ||
MPI_Request_free | ❌ | ||
MPI_Request_get_status | ❌ | ||
MPI_Rget | ❌ | ||
MPI_Rget_accumulate | ❌ | ||
MPI_Rput | ❌ | ||
MPI_Rsend | ✔️ | mpi::comm([COMM])->dest([RANK])->rsend([VALUE]) |
|
MPI_Rsend_init | ❌ | ||
MPI_Scan | ❌ | ||
MPI_Scatter | ✔️ | mpi::comm([COMM])->source([RANK])->scatter([VALUE], [CHUNKSIZE]) |
|
MPI_Scatterv | ❌ | ||
MPI_Send | ✔️ | mpi::comm([COMM])->dest([RANK])->send([VALUE]) |
|
MPI_Send_init | ❌ | ||
MPI_Sendrecv | ✔️ | mpi::comm([COMM])->source([RANK])->dest([RANK])->sendrecv([VALUE], [BUCKET]) |
|
MPI_Sendrecv_replace | ✔️ | mpi::comm([COMM])->source([RANK])->dest([RANK])->sendrecv_replace([VALUE]) ² |
|
MPI_Ssend | ✔️ | mpi::comm([COMM])->dest([RANK])->ssend([VALUE]) |
|
MPI_Ssend_init | ❌ | ||
MPI_Start | ❌ | ||
MPI_Startall | ❌ | ||
MPI_Status_set_cancelled | ❌ | ||
MPI_Status_set_elements | ❌ | ||
MPI_Status_set_elements_x | ❌ | ||
MPI_T_category_changed | ❌ | ||
MPI_T_category_get_categories | ❌ | ||
MPI_T_category_get_cvars | ❌ | ||
MPI_T_category_get_info | ❌ | ||
MPI_T_category_get_num | ❌ | ||
MPI_T_category_get_pvars | ❌ | ||
MPI_T_cvar_get_info | ❌ | ||
MPI_T_cvar_get_num | ❌ | ||
MPI_T_cvar_handle_alloc | ❌ | ||
MPI_T_cvar_handle_free | ❌ | ||
MPI_T_cvar_read | ❌ | ||
MPI_T_cvar_write | ❌ | ||
MPI_T_enum_get_info | ❌ | ||
MPI_T_enum_get_item | ❌ | ||
MPI_T_finalize | ❌ | ||
MPI_T_init_thread | ❌ | ||
MPI_T_pvar_get_info | ❌ | ||
MPI_T_pvar_get_num | ❌ | ||
MPI_T_pvar_handle_alloc | ❌ | ||
MPI_T_pvar_handle_free | ❌ | ||
MPI_T_pvar_read | ❌ | ||
MPI_T_pvar_readreset | ❌ | ||
MPI_T_pvar_reset | ❌ | ||
MPI_T_pvar_session_create | ❌ | ||
MPI_T_pvar_session_free | ❌ | ||
MPI_T_pvar_start | ❌ | ||
MPI_T_pvar_stop | ❌ | ||
MPI_T_pvar_write | ❌ | ||
MPI_Test | ✔️ | .test() on the mpi::request object, or mpi::test([REQUEST]) . |
|
MPI_Test_cancelled | ❌ | ||
MPI_Testall | ✔️ | mpi::testall([REQUEST], ...) , or mpi::testall([REQUEST_VECTOR]) |
|
MPI_Testany | ✔️ | mpi::testany([REQUEST], ...) , or mpi::testany([REQUEST_VECTOR]) |
|
MPI_Testsome | ✔️ | mpi::testsome([REQUEST], ...) , or mpi::testsome([REQUEST_VECTOR]) |
|
MPI_Topo_test | ❌ | ||
MPI_Type_commit | ❌ | ||
MPI_Type_contiguous | ❌ | ||
MPI_Type_create_darray | ❌ | ||
MPI_Type_create_hindexed | ❌ | ||
MPI_Type_create_hindexed_block | ❌ | ||
MPI_Type_create_hvector | ❌ | ||
MPI_Type_create_indexed_block | ❌ | ||
MPI_Type_create_keyval | ❌ | ||
MPI_Type_create_resized | ❌ | ||
MPI_Type_create_struct | ❌ | ||
MPI_Type_create_subarray | ❌ | ||
MPI_Type_delete_attr | ❌ | ||
MPI_Type_dup | ❌ | ||
MPI_Type_extent | ❌ | ||
MPI_Type_free | ❌ | ||
MPI_Type_free_keyval | ❌ | ||
MPI_Type_get_attr | ❌ | ||
MPI_Type_get_contents | ❌ | ||
MPI_Type_get_envelope | ❌ | ||
MPI_Type_get_extent | ❌ | ||
MPI_Type_get_extent_x | ❌ | ||
MPI_Type_get_name | ❌ | ||
MPI_Type_get_true_extent | ❌ | ||
MPI_Type_get_true_extent_x | ❌ | ||
MPI_Type_hindexed | ❌ | ||
MPI_Type_hvector | ❌ | ||
MPI_Type_indexed | ❌ | ||
MPI_Type_lb | ❌ | ||
MPI_Type_match_size | ❌ | ||
MPI_Type_set_attr | ❌ | ||
MPI_Type_set_name | ❌ | ||
MPI_Type_size | ❌ | ||
MPI_Type_size_x | ❌ | ||
MPI_Type_struct | ❌ | ||
MPI_Type_ub | ❌ | ||
MPI_Type_vector | ❌ | ||
MPI_Unpack | ❌ | ||
MPI_Unpack_external | ❌ | ||
MPI_Unpublish_name | ❌ | ||
MPI_Wait | ✔️ | .wait() on the mpi::request object, or mpi::wait([REQUEST]) . |
|
MPI_Waitall | ✔️ | mpi::waitall([REQUEST], ...) , or mpi::waitall([REQUEST_VECTOR]) |
|
MPI_Waitany | ✔️ | mpi::waitany([REQUEST], ...) , or mpi::waitany([REQUEST_VECTOR]) |
|
MPI_Waitsome | ✔️ | mpi::waitsome([REQUEST], ...) , or mpi::waitsome([REQUEST_VECTOR]) |
|
MPI_Win_allocate | ❌ | ||
MPI_Win_allocate_shared | ❌ | ||
MPI_Win_attach | ❌ | ||
MPI_Win_call_errhandler | ❌ | ||
MPI_Win_complete | ❌ | ||
MPI_Win_create | ❌ | ||
MPI_Win_create_dynamic | ❌ | ||
MPI_Win_create_errhandler | ❌ | ||
MPI_Win_create_keyval | ❌ | ||
MPI_Win_delete_attr | ❌ | ||
MPI_Win_detach | ❌ | ||
MPI_Win_fence | ❌ | ||
MPI_Win_flush | ❌ | ||
MPI_Win_flush_all | ❌ | ||
MPI_Win_flush_local | ❌ | ||
MPI_Win_flush_local_all | ❌ | ||
MPI_Win_free | ❌ | ||
MPI_Win_free_keyval | ❌ | ||
MPI_Win_get_attr | ❌ | ||
MPI_Win_get_errhandler | ❌ | ||
MPI_Win_get_group | ❌ | ||
MPI_Win_get_info | ❌ | ||
MPI_Win_get_name | ❌ | ||
MPI_Win_lock | ❌ | ||
MPI_Win_lock_all | ❌ | ||
MPI_Win_post | ❌ | ||
MPI_Win_set_attr | ❌ | ||
MPI_Win_set_errhandler | ❌ | ||
MPI_Win_set_info | ❌ | ||
MPI_Win_set_name | ❌ | ||
MPI_Win_shared_query | ❌ | ||
MPI_Win_start | ❌ | ||
MPI_Win_sync | ❌ | ||
MPI_Win_test | ❌ | ||
MPI_Win_unlock | ❌ | ||
MPI_Win_unlock_all | ❌ | ||
MPI_Win_wait | ❌ | ||
MPI_Wtick | ❌ | ||
MPI_Wtime | ❌ |
¹ MPI requires a special function signature for its operations, which is annoying to create by hand. mpiwrap therefore provides a proxy object (`mpi::op`) that generates this signature from a binary operation. This proxy is created by calling `mpi::make_op` with either a pure lambda, a functor, or a wrapped C++ function pointer. Unfortunately, due to the way C++ function pointers interact with C function pointers, we are limited to these three options. Similar to the MPI version, `mpi::make_op` can be provided a `commute` setting, which defaults to `false`.
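A sketch of the two creation styles from the table (the lambda form and the wrapped-function form; the `maximum` function here is a made-up example, and running this requires MPI and mpiwrap):

```cpp
#include <vector>

#include <mpiwrap/mpi.h>

// An ordinary C++ function to be wrapped as a reduction operation.
int maximum(int a, int b) { return a > b ? a : b; }

int main(int argc, char **argv)
{
    mpi::mpi init{argc, argv};

    // From a pure lambda; the commute setting defaults to false.
    auto sum = mpi::make_op<int>([](int a, int b) { return a + b; });

    // From a wrapped C++ function pointer.
    auto max_op = mpi::make_op<int>(mpi::wrap<int, maximum>);

    // The resulting mpi::op can be passed wherever [OP] is expected;
    // MPI_Op_free is handled automatically when it goes out of scope.
    std::vector<int> value{1, 2, 3}, bucket;
    mpi::comm("world")->allreduce(value, bucket, sum);
    return 0;
}
```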
² MPI implements a separate `MPI_Sendrecv_replace` function, which does not support container resizing. Therefore, mpiwrap does not use this function; instead, the arguments are rerouted to `mpi::sendrecv` in order to allow the proper resizing behaviour.
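From the user's perspective the call still looks like a single replace operation. A sketch of a ring exchange using the `sendrecv_replace` usage from the table (illustrative only; requires MPI and mpiwrap to run):

```cpp
#include <vector>

#include <mpiwrap/mpi.h>

int main(int argc, char **argv)
{
    mpi::mpi init{argc, argv};
    auto rank = mpi::comm("world")->rank();
    auto size = mpi::comm("world")->size();

    // Exchange with the neighbouring ranks in a ring: send to the right,
    // receive from the left. The received message replaces the contents
    // of value, resizing the vector if necessary.
    std::vector<int> value(4, rank);
    auto right = (rank + 1) % size;
    auto left = (rank + size - 1) % size;
    mpi::comm("world")->source(left)->dest(right)->sendrecv_replace(value);
    return 0;
}
```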