
Order of parallel communication #252

@talbring

Description


While working on the turbo features, Salvo and I noticed that there is currently a problem with the order of the communication (i.e., the order in which we loop through the SEND_RECEIVE markers).

In the code we always have loops like this:

```cpp
for (iMarker = 0; iMarker < config->GetnMarker_All(); iMarker++) {

  if ((config->GetMarker_All_KindBC(iMarker) == SEND_RECEIVE) &&
      (config->GetMarker_All_SendRecv(iMarker) > 0)) {

    ...
```

This way the send/receive involving the periodic boundaries is always done before the send/receive involving the boundaries related to the parallel partitioning (because the periodic structure is created before the partitioning is done). Hence the wrong values are sent to the periodic ghost cells.
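
To make the ordering issue concrete, here is a minimal standalone sketch (not SU2 code; the marker layout and the values are made up for illustration) of how a forward loop copies a stale halo value into the periodic ghost cell:

```cpp
#include <iostream>
#include <vector>

// Toy model: one interior cell, one partition halo cell, one periodic
// ghost cell whose donor is the halo cell.
enum class MarkerKind { Periodic, Partition };

struct Marker {
  MarkerKind kind;
  int donor;  // cell the ghost value is copied from
  int ghost;  // cell that receives the copy
};

int main() {
  // cell 0: interior value owned by this rank
  // cell 1: halo cell, to be filled by the partition send/receive
  // cell 2: periodic ghost cell, whose donor is the halo cell 1
  std::vector<double> cells = {1.0, -999.0 /* stale halo */, 0.0};

  // The periodic marker was created before the partition marker, so a
  // forward loop over the markers visits it first.
  std::vector<Marker> markers = {{MarkerKind::Periodic, 1, 2},
                                 {MarkerKind::Partition, 0, 1}};

  for (const Marker& m : markers)     // forward order: periodic first
    cells[m.ghost] = cells[m.donor];  // periodic ghost copies the stale halo

  // Prints -999 instead of 1; visiting the markers in reverse order
  // (partition first) would copy the fresh value into the halo before
  // the periodic copy runs.
  std::cout << "periodic ghost = " << cells[2] << "\n";
}
```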

An easy solution is to reverse the order of the loop, i.e.:

```cpp
for (iMarker = config->GetnMarker_All() - 1; iMarker >= 0; iMarker--) {

  if ((config->GetMarker_All_KindBC(iMarker) == SEND_RECEIVE) &&
      (config->GetMarker_All_SendRecv(iMarker) > 0)) {

    ...
```
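
One caveat about the reversed loop, purely as an assumption on my side since the snippet does not show the declaration of iMarker: if iMarker is an unsigned type (as marker indices often are), the condition iMarker >= 0 is always true and the loop wraps around instead of terminating. A minimal sketch of a reversed loop that avoids this by counting with a signed index:

```cpp
#include <cstdio>

int main() {
  const unsigned short nMarker = 3;  // stands in for config->GetnMarker_All()

  // Safe reversed loop: the counter is signed, so iMarker >= 0 can fail.
  for (int iMarker = nMarker - 1; iMarker >= 0; iMarker--)
    std::printf("visiting marker %d\n", iMarker);

  // With an unsigned counter, e.g.
  //   for (unsigned short iMarker = nMarker - 1; iMarker >= 0; iMarker--)
  // the condition always holds and the loop never terminates.
}
```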

Since I am not completely sure whether this is a valid solution (I don't know if it causes performance issues or breaks something else), I opened this as an issue.

Related to this, I would also suggest implementing a general send/receive routine that every class can use, because all the SendReceive_* and Set_MPI_* routines essentially do the same thing at the moment (just with different data).
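
To make that suggestion a bit more concrete, here is a rough sketch of what such a shared routine could look like. The class name, the Pack/Unpack callbacks and the neighbor/point lists are hypothetical and only illustrate the idea; they are not SU2's actual API.

```cpp
#include <mpi.h>

#include <cstddef>
#include <functional>
#include <vector>

// Hypothetical shared exchange routine: the loop over the SEND_RECEIVE
// neighbors (and hence its order) lives in one place, while each caller
// only supplies how its own data is packed and unpacked.
class CGenericComm {
 public:
  static void Exchange(
      const std::vector<int>& neighborRanks,             // one entry per send/receive pair
      const std::vector<std::vector<long>>& sendPoints,  // points to pack per neighbor
      const std::vector<std::vector<long>>& recvPoints,  // points to unpack per neighbor
      int nVarPerPoint,
      const std::function<void(long point, double* buf)>& Pack,
      const std::function<void(long point, const double* buf)>& Unpack) {
    for (std::size_t iNeighbor = 0; iNeighbor < neighborRanks.size(); ++iNeighbor) {
      std::vector<double> sendBuf(sendPoints[iNeighbor].size() * nVarPerPoint);
      std::vector<double> recvBuf(recvPoints[iNeighbor].size() * nVarPerPoint);

      for (std::size_t i = 0; i < sendPoints[iNeighbor].size(); ++i)
        Pack(sendPoints[iNeighbor][i], &sendBuf[i * nVarPerPoint]);

      // Blocking pairwise exchange; a production version would probably
      // use non-blocking Isend/Irecv to overlap the neighbors.
      MPI_Sendrecv(sendBuf.data(), (int)sendBuf.size(), MPI_DOUBLE,
                   neighborRanks[iNeighbor], 0,
                   recvBuf.data(), (int)recvBuf.size(), MPI_DOUBLE,
                   neighborRanks[iNeighbor], 0,
                   MPI_COMM_WORLD, MPI_STATUS_IGNORE);

      for (std::size_t i = 0; i < recvPoints[iNeighbor].size(); ++i)
        Unpack(recvPoints[iNeighbor][i], &recvBuf[i * nVarPerPoint]);
    }
  }
};
```

With something along these lines, the various SendReceive_* and Set_MPI_* variants would reduce to providing the appropriate Pack/Unpack callbacks, and any fix to the communication order would only have to be made in one place.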
