
Add CUDA/HIP implementations of reduction operators #12569

Status: Open. devreal wants to merge 12 commits into main from op-accel-kernels.
Conversation

@devreal (Contributor) commented May 23, 2024

This is the second part of #12318: it provides the device-side reduction operators and adds stream semantics to ompi_op_reduce.

As usual, the operators are generated from macros. Function pointers to kernel launch functions are stored inside the ompi_op_t as a pointer to a struct that is filled if accelerator support is available.
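The struct-of-function-pointers layout described above can be sketched roughly like this (all names below are invented stand-ins for illustration, not the actual ompi_op_t fields):

```cpp
// Illustrative sketch only: the op object carries a lazily-filled pointer to
// a table of kernel-launch functions, one per datatype, so host-only builds
// pay nothing but a NULL pointer.
#include <cstddef>
#include <cassert>

typedef void (*device_op_fn_t)(const void *in, void *inout,
                               size_t count, void *stream);

enum { OP_TYPE_MAX = 4 };  // stand-in for OMPI_OP_BASE_TYPE_MAX

struct device_op_table {
    device_op_fn_t launch_fns[OP_TYPE_MAX];        // 2-buffer variants
    device_op_fn_t launch_3buff_fns[OP_TYPE_MAX];  // 3-buffer variants
};

struct op {
    // ... host-side function pointer tables elided ...
    device_op_table *device;  // NULL unless accelerator support is available
};

static bool op_has_device_support(const op *o, int type_idx) {
    return o->device != nullptr && o->device->launch_fns[type_idx] != nullptr;
}
```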

There are two pieces to the cuda/hip implementation:

  1. A small wrapper that takes care of local configuration specifics (Fortran type sizes, for example) and that handles OMPI/OPAL types.
  2. An implementation that is agnostic of OMPI/OPAL headers. We found that some vendor compilers didn't like pulling in OMPI/OPAL headers, so we had to split the two.

Currently not supported are short float and long double, since they are either not supported everywhere or not standardized. I hope I caught all other types, including the pair types for the loc functions. Since the implementations are agnostic of OMPI/OPAL headers, the code has to map the Fortran types to C types in the implementation.
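The Fortran-to-C mapping mentioned above might look roughly like this (the macro and type names here are invented; the real code derives the sizes from configure):

```cpp
// Illustrative only: the implementation cannot include OMPI/OPAL headers, so
// Fortran types are mapped to fixed-width C types by their configured size.
// SIZEOF_FORTRAN_INTEGER is a made-up stand-in for the real config symbol.
#include <cstdint>
#include <cassert>

#define SIZEOF_FORTRAN_INTEGER 4  // would come from configure in reality

#if SIZEOF_FORTRAN_INTEGER == 4
typedef int32_t impl_fortran_integer_t;
#elif SIZEOF_FORTRAN_INTEGER == 8
typedef int64_t impl_fortran_integer_t;
#else
#error "unsupported Fortran INTEGER size"
#endif
```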

The device_op_pre and device_op_post functions are there to set up the environment for the kernel, including allocating memory on the device if one of the inputs is not on the chosen device. Operators cannot return an error, so whatever the caller feeds us we have to eat. Not pretty, but hopefully better than aborting.
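A minimal sketch of that pre/post staging idea, with hypothetical helpers standing in for the real accelerator-framework calls:

```cpp
// Conceptual sketch (all names hypothetical): stage a host buffer onto the
// device before the kernel runs, and release the staging buffer afterwards.
#include <cstdlib>
#include <cstring>
#include <cassert>

// Stand-ins for device queries/copies; a real build would call into CUDA/HIP.
static bool is_device_ptr(const void *p) { (void)p; return false; }
static void *device_alloc(size_t n) { return std::malloc(n); }  // pretend
static void device_free(void *p) { std::free(p); }

// Ensure `src` is usable by the kernel; stage a copy if it lives on the host.
// Returns the pointer the kernel should read and records any staging buffer.
static const void *device_op_pre(const void *src, size_t nbytes, void **staged) {
    *staged = nullptr;
    if (is_device_ptr(src)) return src;
    void *buf = device_alloc(nbytes);  // no way to report failure here:
    std::memcpy(buf, src, nbytes);     // operators cannot return errors
    *staged = buf;
    return buf;
}

static void device_op_post(void *staged) {
    if (staged != nullptr) device_free(staged);
}
```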

This branch requires #12356. I will rebase once that is merged.

Questions:

  1. Should the rocm component be renamed hip? (I'm afraid it should, @edgargabriel :D)
  2. How do people feel about generating the hip component from the cuda component using hipify+sed scripts? We'd always require hipify to be available, and I could use some help integrating that into the build system, but there really isn't much difference between the two.

}

if (MCA_ACCELERATOR_NO_DEVICE_ID == target_device) {
opal_accelerator.mem_release_stream(device, target, stream);
Member:

just as a thought for a subsequent PR, we could get rid of the mem_alloc and mem_release functions in the accelerator framework interfaces and have only the stream based version, with the default stream being used if no stream argument has been provided by the user. This would reduce the API functions a bit and avoid nearly identical code.
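A sketch of that consolidation, with invented signatures: the non-stream entry point becomes a thin shim over the stream version, passing a NULL (i.e. default) stream:

```cpp
// Hypothetical signatures, not the actual opal_accelerator API: keep only the
// stream-based allocator and express the legacy variant in terms of it.
#include <cstdlib>
#include <cassert>

typedef void *accel_stream_t;  // NULL means "default stream"

static int mem_alloc_stream(void **ptr, size_t size, accel_stream_t stream) {
    (void)stream;  // a real impl would hand this to e.g. cuMemAllocAsync
    *ptr = std::malloc(size);
    return (*ptr != nullptr) ? 0 : -1;
}

// The old non-stream function becomes a shim instead of near-duplicate code.
static int mem_alloc(void **ptr, size_t size) {
    return mem_alloc_stream(ptr, size, /*stream=*/nullptr);
}
```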

sources = op_rocm_component.c op_rocm.h op_rocm_functions.c op_rocm_impl.h
rocm_sources = op_rocm_impl.hip

HIPCC = hipcc
Member:
We might have to change that in the near future: hipcc is going away, and we should be using amdclang with --offload-arch arguments. It's OK to leave it as is for now.

#define xstr(x) #x
#define str(x) xstr(x)

#define CHECK(fn, args) \
Member:
We don't abort inside the software stack.

devreal (Contributor Author):

Proposal: add a return value to the internal operator API and wrap user-defined operators that don't provide a return. That adds quite a bit of churn to this PR and touches many more places. Maybe that should be a separate change?
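The proposed wrapper could look roughly like this (names invented): the internal API gains an int return code, and legacy void-returning user-defined operators are adapted to it:

```cpp
// Illustrative sketch of the proposal above; type and function names are
// hypothetical, not the actual OMPI internal operator API.
#include <cassert>

typedef void (*user_op_fn_t)(void *in, void *inout, int *len);
typedef int  (*internal_op_fn_t)(void *in, void *inout, int *len);

struct op_wrapper {
    user_op_fn_t user_fn;
};

// Adapter: invoke the void-returning user op and synthesize a success code,
// since legacy user-defined operators have no way to report failure.
static int wrapped_user_op(op_wrapper *w, void *in, void *inout, int *len) {
    w->user_fn(in, inout, len);
    return 0;
}
```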

devreal (Contributor Author):

@bosilca Are you ok with deferring the change of the internal operator API to a separate PR and leaving the abort in for now?


# -o $($@.o:.lo)

# Open MPI components can be compiled two ways:
Member:
This is especially not true for this component, it can only be built dynamically.

devreal (Contributor Author):

The operator support should only be built dynamically? @edgargabriel suggested that they should be made dynamic by default but should we disallow static building entirely?

Member:

If I understand correctly, allowing static builds forces libompi.so to have a dependency on CUDA. This will break the build on non-CUDA machines.

devreal (Contributor Author):

The accelerator components are dynamic-by-default (#12055) but I couldn't find a similar mechanism for OMPI. We should still allow building the ops statically, for those who know what they are doing.

@bosilca (Member) commented Jun 19, 2024:

As soon as a component calls into libcuda (or, more precisely in this case, libcudart) it can never be built statically.

devreal (Contributor Author):

I'm not sure why that is. The OMPI library would have to be linked against libcudart but that's possible if you build for a CUDA environment specifically.

I marked the two op modules as dso-by-default now.

Member:

Just for the sake of it, I built ompi/main with CUDA from scratch, and now the dependency on libcudart exists everywhere, including in ompi_info.

devreal (Contributor Author):

Yes, this is broken on main, this branch doesn't change that.

const int stride = blockDim.x * gridDim.x; \
for (int i = index; i < n/vlen; i += stride) { \
vtype vin = ((vtype*)in)[i]; \
vtype vinout = ((vtype*)inout)[i]; \
Member:
Why don't you use the templated op defined earlier in the file? If you don't need it, you should remove it.

devreal (Contributor Author):

I am reworking the vectorization to make it more flexible and to avoid some of the tricks I had to use to map the fixed-size integers onto vectors of variable-size integers.

devreal (Contributor Author):

I reworked the vectorization with a custom type and some template work. The goal now is to consistently get 128-bit loads and stores.
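A rough illustration of such a vector type (not the PR's actual code): the lane count is derived from the element size, so every load and store is 16 bytes regardless of the element type:

```cpp
// Illustrative sketch: a fixed 16-byte vector whose number of lanes is
// computed from sizeof(T), so loads/stores are always 128-bit wide.
#include <cstddef>
#include <cstdint>
#include <cassert>

template <typename T>
struct vec128 {
    static constexpr size_t vlen = 16 / sizeof(T);  // lanes per 128-bit load
    T data[vlen];
};

// Elementwise combine; a device build would mark this __host__ __device__.
template <typename T, typename Op>
vec128<T> combine(vec128<T> a, const vec128<T> &b, Op op) {
    for (size_t i = 0; i < vec128<T>::vlen; ++i)
        a.data[i] = op(a.data[i], b.data[i]);
    return a;
}
```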

/** Function pointers for all the different datatypes to be used
with the MPI_Op that this module is used with */
ompi_op_base_handler_fn_1_0_0_t opm_fns[OMPI_OP_BASE_TYPE_MAX];
ompi_op_base_3buff_handler_fn_1_0_0_t opm_3buff_fns[OMPI_OP_BASE_TYPE_MAX];
union {
Member:
Overly complicated, but I can't think of anything significantly better right now.

@devreal devreal force-pushed the op-accel-kernels branch 2 times, most recently from 3ab3371 to 3e4425d Compare June 19, 2024 12:48
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wgnu-zero-variadic-macro-arguments"

static inline void device_op_pre(const void *orig_source1,
devreal (Contributor Author):

@bosilca @edgargabriel
If device_op_pre and device_op_post use the accelerator framework, they are pretty much independent of the model (minus the last two lines). I wonder whether they should be moved to a header in base/ and shared between the two implementations.

The last two lines can be taken out and put into the op macro from which they are called.

@devreal devreal force-pushed the op-accel-kernels branch from 3d1ef9c to 6a85957 Compare June 20, 2024 01:00
@devreal devreal force-pushed the op-accel-kernels branch 2 times, most recently from f689d6d to 25c24c9 Compare June 20, 2024 16:07
@bosilca (Member) commented Jun 21, 2024

Let me add a generic comment here, mostly as a reminder to self: I don't think this is how we should use these op modules, especially not with accelerators. In my vision we decide once and for all, for each operation (or collective), which MPI_Op implementation we will use, and we stay with it for the entire duration. First, because there is no reason to execute half of the MPI_Op on the host and the other half on the device; it is all or none. Second, because we definitely don't want to start each kernel independently; the overhead would just be too costly, annihilating most of the benefits. Instead, once we start a collective, we would start a "service" bound to a specific context (GPU or CPU), and this service would remain active for as long as we are in a collective that needs GPU ops, removing all costs related to kernel submission. The GPU threads would poll a well-defined memory location for work updates, and the CPU would post new ops into this queue.

The only drawback I can see is that the service will take resources from the application, but this loss is very small, as a single SM (or two) is more than enough to saturate the network bandwidth. Once we are outside collectives requiring GPU ops, we can release these resources back to the application.
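A host-side mock of the queue idea above (purely conceptual; in the real design the consumer would be resident GPU threads polling device-visible memory, not host code):

```cpp
// Conceptual single-producer/single-consumer queue: the CPU posts op
// descriptors, the "service" polls them, avoiding a kernel launch per op.
#include <atomic>
#include <cstdint>
#include <cassert>

struct op_desc { const void *in; void *inout; uint32_t count; uint32_t op_id; };

struct op_queue {
    static constexpr uint32_t cap = 64;
    op_desc slots[cap];
    std::atomic<uint32_t> head{0};  // advanced by the CPU producer
    uint32_t tail = 0;              // advanced by the polling service

    bool post(const op_desc &d) {                // CPU side
        uint32_t h = head.load(std::memory_order_relaxed);
        if (h - tail == cap) return false;       // queue full
        slots[h % cap] = d;
        head.store(h + 1, std::memory_order_release);
        return true;
    }
    bool poll(op_desc *out) {                    // service side
        if (tail == head.load(std::memory_order_acquire)) return false;
        *out = slots[tail++ % cap];
        return true;
    }
};
```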

@devreal devreal force-pushed the op-accel-kernels branch from 1ed9579 to b56ab0a Compare July 1, 2024 17:46
@devreal (Contributor Author) commented Jul 2, 2024

The CUDA test fails because we detect CUDA but nvcc is not available (at least not in PATH). We'll need to check that nvcc is available and bail out if it is not. Ideally, we can make nvcc available in CI as well.

@devreal devreal force-pushed the op-accel-kernels branch 2 times, most recently from 4d73198 to a4a84f5 Compare July 8, 2024 18:59
@devreal (Contributor Author) commented Jul 9, 2024

I updated the PR to have precious variables NVCC, HIPCC, NVCCFLAGS, and HIPCCFLAGS so that they can be specified on the command line and show up at the bottom of ./configure --help. I will squash all changes and rebase once approved.

devreal and others added 6 commits September 15, 2024 19:49
The operators are generated from macros. Function pointers to
kernel launch functions are stored inside the ompi_op_t as a
pointer to a struct that is filled if accelerator support is available.

The ompi_op* API is extended to include versions taking streams and device
IDs to allow enqueuing operators on streams. The old functions map
to the stream versions with a NULL stream.

Signed-off-by: Joseph Schuchart <joseph.schuchart@stonybrook.edu>
Signed-off-by: Joseph Schuchart <joseph.schuchart@stonybrook.edu>
Signed-off-by: Joseph Schuchart <joseph.schuchart@stonybrook.edu>
Signed-off-by: Joseph Schuchart <joseph.schuchart@stonybrook.edu>
CUDA provides only limited vector widths, and only for variable-width integer types. We use our own vector type and some C++ templates to get more flexible vectors. We aim for 128-bit loads by adjusting the vector width based on the type size.

Signed-off-by: Joseph Schuchart <joseph.schuchart@stonybrook.edu>
Signed-off-by: Joseph Schuchart <joseph.schuchart@stonybrook.edu>
Signed-off-by: Joseph Schuchart <joseph.schuchart@stonybrook.edu>
Signed-off-by: Joseph Schuchart <joseph.schuchart@stonybrook.edu>
Signed-off-by: Joseph Schuchart <joseph.schuchart@stonybrook.edu>
Signed-off-by: Joseph Schuchart <joseph.schuchart@stonybrook.edu>
Signed-off-by: Joseph Schuchart <joseph.schuchart@stonybrook.edu>
Signed-off-by: Joseph Schuchart <joseph.schuchart@stonybrook.edu>
# try to find nvcc in PATH
[AC_PATH_PROG([NVCC], [nvcc], [])])

# disable ussage of NVCC if explicitly specified
Member:
Suggested change
# disable ussage of NVCC if explicitly specified
# disable usage of NVCC if explicitly specified

@bosilca (Member) left a comment:

In addition to the comments I left on the PR, I have one issue with the lazy-initialization part. In general it is a good idea to delay the expensive but necessary initialization until we know we need it. Fair. However, here we don't even know whether we can support it, so the module will always be loaded and in our way. Basically, we have no way of removing it if the lazy initialization fails.

int num_devices;
int rc;
// TODO: is this init needed here?
cuInit(0);
Member:
I think this part should be done only once for all CUDA-related components. We might need to move it into the common framework.

} else {
/* copy from one device to another device */
/* TODO: does this actually work? Can we enable P2P? */
CHECK(cuMemcpyDtoDAsync, ((CUdeviceptr)*source2, (CUdeviceptr)orig_source2, nbytes, *(CUstream*)stream->stream));
Member:
Thinking more about this, I realized that this entire logic needs to change. I see three cases:

  1. Data located on a different GPU belonging to the same process: manually copying the data upfront is bad for performance; GPUs are really good at doing this automatically, especially within the same process.
  2. Data located on a GPU belonging to a different process: we don't cover that case yet, as it will require different reduction algorithms (this capability would remove one explicit communication).
  3. Data located in main memory: here we only need to copy explicitly if the GPU does not have direct access to the data. We can determine this using the VMM patch that made it into main a few days ago.
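The three cases could be sketched as a small dispatch helper (the enum and helper names are hypothetical, purely to make the case analysis concrete):

```cpp
// Illustrative dispatch over the three buffer-location cases described above.
#include <cassert>

enum class buf_loc { same_proc_other_gpu, other_proc_gpu, host };
enum class copy_action { none_let_gpu_access, unsupported,
                         explicit_copy, direct_access };

// gpu_can_map_host would be derived from the VMM support merged into main.
static copy_action plan_copy(buf_loc loc, bool gpu_can_map_host) {
    switch (loc) {
    case buf_loc::same_proc_other_gpu:
        return copy_action::none_let_gpu_access;  // case 1: peer access is fast
    case buf_loc::other_proc_gpu:
        return copy_action::unsupported;          // case 2: needs new algorithms
    case buf_loc::host:                           // case 3: copy only if needed
        return gpu_can_map_host ? copy_action::direct_access
                                : copy_action::explicit_copy;
    }
    return copy_action::unsupported;
}
```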

@devreal (Contributor Author) commented Oct 1, 2024

As I said earlier: please provide a patch for changes you want. I've run out of time to spend on this.
