From 3d90efa837fd85f838ca3920ee6f62b9d8b5b913 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?M=C3=A1ty=C3=A1s=20Aradi?= Date: Tue, 6 Aug 2024 14:02:23 +0200 Subject: [PATCH] Update driver API --- .wordlist.txt | 12 +- docs/how-to/hip_porting_driver_api.md | 304 --------------- docs/how-to/hip_porting_driver_api.rst | 480 ++++++++++++++++++++++++ docs/index.md | 1 + docs/reference/driver_api_reference.rst | 33 ++ docs/sphinx/_toc.yml.in | 1 + 6 files changed, 526 insertions(+), 305 deletions(-) delete mode 100644 docs/how-to/hip_porting_driver_api.md create mode 100644 docs/how-to/hip_porting_driver_api.rst create mode 100644 docs/reference/driver_api_reference.rst diff --git a/.wordlist.txt b/.wordlist.txt index b3161fa928..2488764c1d 100644 --- a/.wordlist.txt +++ b/.wordlist.txt @@ -1,3 +1,4 @@ +.hip_fatbin ALU ALUs AmgX @@ -15,9 +16,12 @@ Builtins CAS clr coroutines +Ctx cuBLASLt cuCtx +CUDA's cuDNN +cuModule dataflow deallocate decompositions @@ -35,6 +39,7 @@ enum enums embeded extern +fatbin fatbinary foundationally frontends @@ -46,13 +51,17 @@ GPGPU GWS hardcoded HC +hcBLAS +HIP-Clang HIP's hipcc +hipCtx hipexamine hipified +hipModule +hipModuleLaunchKernel hipother HIPRTC -hcBLAS icc IILE iGPU @@ -122,6 +131,7 @@ texel texels tradeoffs templated +toolkits typedefs unintuitive UMM diff --git a/docs/how-to/hip_porting_driver_api.md b/docs/how-to/hip_porting_driver_api.md deleted file mode 100644 index 57879264a2..0000000000 --- a/docs/how-to/hip_porting_driver_api.md +++ /dev/null @@ -1,304 +0,0 @@ -# Porting CUDA driver API - -## Introduction to the CUDA Driver and Runtime APIs - -CUDA provides a separate CUDA Driver and Runtime APIs. The two APIs have significant overlap in functionality: - -* Both APIs support events, streams, memory management, memory copy, and error handling. -* Both APIs deliver similar performance. -* Driver APIs calls begin with the prefix `cu` while Runtime APIs begin with the prefix `cuda`. For example, the Driver API API contains `cuEventCreate` while the Runtime API contains `cudaEventCreate`, with similar functionality. -* The Driver API defines a different but largely overlapping error code space than the Runtime API, and uses a different coding convention. For example, Driver API defines `CUDA_ERROR_INVALID_VALUE` while the Runtime API defines `cudaErrorInvalidValue` - -The Driver API offers two additional pieces of functionality not provided by the Runtime API: `cuModule` and `cuCtx` APIs. - -### `cuModule` API - -The Module section of the Driver API provides additional control over how and when accelerator code objects are loaded. -For example, the driver API allows code objects to be loaded from files or memory pointers. -Symbols for kernels or global data can be extracted from the loaded code objects. -In contrast, the Runtime API automatically loads and (if necessary) compiles all of the kernels from an executable binary when run. -In this mode, NVCC must be used to compile kernel code so the automatic loading can function correctly. - -Both Driver and Runtime APIs define a function for launching kernels (called `cuLaunchKernel` or `cudaLaunchKernel`. -The kernel arguments and the execution configuration (grid dimensions, group dimensions, dynamic shared memory, and stream) are passed as arguments to the launch function. -The Runtime additionally provides the `<<< >>>` syntax for launching kernels, which resembles a special function call and is easier to use than explicit launch API (in particular with respect to handling of kernel arguments). 
-However, this syntax is not standard C++ and is available only when NVCC is used to compile the host code. - -The Module features are useful in an environment which generates the code objects directly, such as a new accelerator language front-end. -Here, NVCC is not used. Instead, the environment may have a different kernel language or different compilation flow. -Other environments have many kernels and do not want them to be all loaded automatically. -The Module functions can be used to load the generated code objects and launch kernels. -As we will see below, HIP defines a Module API which provides similar explicit control over code object management. - -### `cuCtx` API - -The Driver API defines "Context" and "Devices" as separate entities. -Contexts contain a single device, and a device can theoretically have multiple contexts. -Each context contains a set of streams and events specific to the context. -Historically contexts also defined a unique address space for the GPU, though this may no longer be the case in Unified Memory platforms (since the CPU and all the devices in the same process share a single unified address space). -The Context APIs also provide a mechanism to switch between devices, which allowed a single CPU thread to send commands to different GPUs. -HIP as well as a recent versions of CUDA Runtime provide other mechanisms to accomplish this feat - for example using streams or `cudaSetDevice`. - -The CUDA Runtime API unifies the Context API with the Device API. This simplifies the APIs and has little loss of functionality since each Context can contain a single device, and the benefits of multiple contexts has been replaced with other interfaces. -HIP provides a context API to facilitate easy porting from existing Driver codes. -In HIP, the `Ctx` functions largely provide an alternate syntax for changing the active device. - -Most new applications will prefer to use `hipSetDevice` or the stream APIs, therefore HIP has marked `hipCtx` APIs as **deprecated**. Support for these APIs may not be available in future releases. For more details on deprecated APIs please refer [HIP deprecated APIs](../reference/deprecated_api_list). - -## HIP Module and `Ctx` APIs - -Rather than present two separate APIs, HIP extends the HIP API with new APIs for Modules and `Ctx` control. - -### `hipModule` API - -Like the CUDA Driver API, the Module API provides additional control over how code is loaded, including options to load code from files or from in-memory pointers. -NVCC and HIP-Clang target different architectures and use different code object formats: NVCC is `cubin` or `ptx` files, while the HIP-Clang path is the `hsaco` format. -The external compilers which generate these code objects are responsible for generating and loading the correct code object for each platform. -Notably, there is not a fat binary format that can contain code for both NVCC and HIP-Clang platforms. The following table summarizes the formats used on each platform: - -| Format | APIs | NVCC | HIP-CLANG | -| --- | --- | --- | --- | -| Code Object | `hipModuleLoad`, `hipModuleLoadData` | `.cubin` or PTX text | `.hsaco` | -| Fat Binary | `hipModuleLoadFatBin` | `.fatbin` | `.hip_fatbin` | - -`hipcc` uses HIP-Clang or NVCC to compile host codes. Both of these may embed code objects into the final executable, and these code objects will be automatically loaded when the application starts. 
-The `hipModule` API can be used to load additional code objects, and in this way provides an extended capability to the automatically loaded code objects. -HIP-Clang allows both of these capabilities to be used together, if desired. Of course it is possible to create a program with no kernels and thus no automatic loading. - -### `hipCtx` API - -HIP provides a `Ctx` API as a thin layer over the existing Device functions. This `Ctx` API can be used to set the current context, or to query properties of the device associated with the context. -The current context is implicitly used by other APIs such as `hipStreamCreate`. - -### hipify translation of CUDA Driver API - -The HIPIFY tools convert CUDA Driver APIs for streams, events, modules, devices, memory management, context, profiler to the equivalent HIP driver calls. For example, `cuEventCreate` will be translated to `hipEventCreate`. -HIPIFY tools also convert error codes from the Driver namespace and coding convention to the equivalent HIP error code. Thus, HIP unifies the APIs for these common functions. - -The memory copy API requires additional explanation. The CUDA driver includes the memory direction in the name of the API (`cuMemcpyH2D`) while the CUDA driver API provides a single memory copy API with a parameter that specifies the direction and additionally supports a "default" direction where the runtime determines the direction automatically. -HIP provides APIs with both styles: for example, `hipMemcpyH2D` as well as `hipMemcpy`. -The first flavor may be faster in some cases since they avoid host overhead to detect the different memory directions. - -HIP defines a single error space, and uses camel-case for all errors (i.e. `hipErrorInvalidValue`). - -#### Address Spaces - -HIP-Clang defines a process-wide address space where the CPU and all devices allocate addresses from a single unified pool. -Thus addresses may be shared between contexts, and unlike the original CUDA definition a new context does not create a new address space for the device. - -#### Using `hipModuleLaunchKernel` - -`hipModuleLaunchKernel` is `cuLaunchKernel` in HIP world. It takes the same arguments as `cuLaunchKernel`. - -#### Additional Information - -* HIP-Clang creates a primary context when the HIP API is called. So in a pure driver API code, HIP-Clang will create a primary context while HIP/NVCC will have empty context stack. -HIP-Clang will push primary context to context stack when it is empty. This can have subtle differences on applications which mix the runtime and driver APIs. - -### `hip-clang` Implementation Notes - -#### `.hip_fatbin` - -hip-clang links device code from different translation units together. For each device target, a code object is generated. Code objects for different device targets are bundled by `clang-offload-bundler` as one fatbinary, which is embeded as a global symbol `__hip_fatbin` in the `.hip_fatbin` section of the ELF file of the executable or shared object. - -#### Initialization and Termination Functions - -hip-clang generates initialization and termination functions for each translation unit for host code compilation. The initialization functions call `__hipRegisterFatBinary` to register the fatbinary embeded in the ELF file. They also call `__hipRegisterFunction` and `__hipRegisterVar` to register kernel functions and device side global variables. The termination functions call `__hipUnregisterFatBinary`. 
-hip-clang emits a global variable `__hip_gpubin_handle` of void** type with linkonce linkage and inital value 0 for each host translation unit. Each initialization function checks `__hip_gpubin_handle` and register the fatbinary only if `__hip_gpubin_handle` is 0 and saves the return value of `__hip_gpubin_handle` to `__hip_gpubin_handle`. This is to guarantee that the fatbinary is only registered once. Similar check is done in the termination functions. - -#### Kernel Launching - -hip-clang supports kernel launching by CUDA `<<<>>>` syntax, hipLaunchKernelGGL. The latter one is macro which expand to CUDA `<<<>>>` syntax. - -When the executable or shared library is loaded by the dynamic linker, the initialization functions are called. In the initialization functions, when `__hipRegisterFatBinary` is called, the code objects containing all kernels are loaded; when `__hipRegisterFunction` is called, the stub functions are associated with the corresponding kernels in code objects. - -hip-clang implements two sets of kernel launching APIs. - -By default, in the host code, for the `<<<>>>` statement, hip-clang first emits call of `hipConfigureCall` to set up the threads and grids, then emits call of the stub function with the given arguments. In the stub function, `hipSetupArgument` is called for each kernel argument, then `hipLaunchByPtr` is called with a function pointer to the stub function. In `hipLaunchByPtr`, the real kernel associated with the stub function is launched. - -### NVCC Implementation Notes - -#### Interoperation between HIP and CUDA Driver - -CUDA applications may want to mix CUDA driver code with HIP code (see example below). This table shows the type equivalence to enable this interaction. - -|**HIP Type** |**CU Driver Type**|**CUDA Runtime Type**| -| ---- | ---- | ---- | -| `hipModule_t` | `CUmodule` | | -| `hipFunction_t` | `CUfunction` | | -| `hipCtx_t` | `CUcontext` | | -| `hipDevice_t` | `CUdevice` | | -| `hipStream_t` | `CUstream` | `cudaStream_t` | -| `hipEvent_t` | `CUevent` | `cudaEvent_t` | -| `hipArray` | `CUarray` | `cudaArray` | - -#### Compilation Options - -The `hipModule_t` interface does not support `cuModuleLoadDataEx` function, which is used to control PTX compilation options. -HIP-Clang does not use PTX and does not support these compilation options. -In fact, HIP-Clang code objects always contain fully compiled ISA and do not require additional compilation as a part of the load step. -The corresponding HIP function `hipModuleLoadDataEx` behaves as `hipModuleLoadData` on HIP-Clang path (compilation options are not used) and as `cuModuleLoadDataEx` on NVCC path. 
-For example (CUDA): - -```cpp -CUmodule module; -void *imagePtr = ...; // Somehow populate data pointer with code object - -const int numOptions = 1; -CUJit_option options[numOptions]; -void * optionValues[numOptions]; - -options[0] = CU_JIT_MAX_REGISTERS; -unsigned maxRegs = 15; -optionValues[0] = (void*)(&maxRegs); - -cuModuleLoadDataEx(module, imagePtr, numOptions, options, optionValues); - -CUfunction k; -cuModuleGetFunction(&k, module, "myKernel"); -``` - -HIP: - -```cpp -hipModule_t module; -void *imagePtr = ...; // Somehow populate data pointer with code object - -const int numOptions = 1; -hipJitOption options[numOptions]; -void * optionValues[numOptions]; - -options[0] = hipJitOptionMaxRegisters; -unsigned maxRegs = 15; -optionValues[0] = (void*)(&maxRegs); - -// hipModuleLoadData(module, imagePtr) will be called on HIP-Clang path, JIT options will not be used, and -// cupModuleLoadDataEx(module, imagePtr, numOptions, options, optionValues) will be called on NVCC path -hipModuleLoadDataEx(module, imagePtr, numOptions, options, optionValues); - -hipFunction_t k; -hipModuleGetFunction(&k, module, "myKernel"); -``` - -The below sample shows how to use `hipModuleGetFunction`. - -```cpp -#include -#include -#include -#include -#include - -#define LEN 64 -#define SIZE LEN<<2 - -#ifdef __HIP_PLATFORM_AMD__ -#define fileName "vcpy_isa.co" -#endif - -#ifdef __HIP_PLATFORM_NVIDIA__ -#define fileName "vcpy_isa.ptx" -#endif - -#define kernel_name "hello_world" - -int main(){ - float *A, *B; - hipDeviceptr_t Ad, Bd; - A = new float[LEN]; - B = new float[LEN]; - - for(uint32_t i=0;iargBuffer(2); - memcpy(&argBuffer[0], &Ad, sizeof(void*)); - memcpy(&argBuffer[1], &Bd, sizeof(void*)); - - size_t size = argBuffer.size()*sizeof(void*); - - void *config[] = { - HIP_LAUNCH_PARAM_BUFFER_POINTER, &argBuffer[0], - HIP_LAUNCH_PARAM_BUFFER_SIZE, &size, - HIP_LAUNCH_PARAM_END - }; - - hipModuleLaunchKernel(Function, 1, 1, 1, LEN, 1, 1, 0, 0, NULL, (void**)&config); - - hipMemcpyDtoH(B, Bd, SIZE); - for(uint32_t i=0;i tex; - -__global__ void tex2dKernel(hipLaunchParm lp, float* outputData, - int width, - int height) -{ - int x = blockIdx.x*blockDim.x + threadIdx.x; - int y = blockIdx.y*blockDim.y + threadIdx.y; - outputData[y*width + x] = tex2D(tex, x, y); -} - -``` - -```cpp -// Host code: - -texture tex; - -void myFunc () -{ - // ... - - textureReference* texref; - hipModuleGetTexRef(&texref, Module1, "tex"); - hipTexRefSetAddressMode(texref, 0, hipAddressModeWrap); - hipTexRefSetAddressMode(texref, 1, hipAddressModeWrap); - hipTexRefSetFilterMode(texref, hipFilterModePoint); - hipTexRefSetFlags(texref, 0); - hipTexRefSetFormat(texref, HIP_AD_FORMAT_FLOAT, 1); - hipTexRefSetArray(texref, array, HIP_TRSA_OVERRIDE_FORMAT); - - // ... -} -``` diff --git a/docs/how-to/hip_porting_driver_api.rst b/docs/how-to/hip_porting_driver_api.rst new file mode 100644 index 0000000000..ae8d2fdeee --- /dev/null +++ b/docs/how-to/hip_porting_driver_api.rst @@ -0,0 +1,480 @@ +.. meta:: + :description: This chapter presents how to port the CUDA driver API and showcases equivalent operations in HIP. + :keywords: AMD, ROCm, HIP, CUDA, driver API + +.. _porting_driver_api: + +******************************************************************************* +Porting CUDA driver API +******************************************************************************* + +NVIDIA provides separate CUDA driver and runtime APIs. 
The two APIs have significant overlap in functionality:
+
+* Both APIs support events, streams, memory management, memory copy, and error handling.
+* Both APIs deliver similar performance.
+* Driver API calls begin with the prefix ``cu``, while runtime API calls begin with the prefix ``cuda``. For example, the driver API contains ``cuEventCreate``, while the runtime API contains ``cudaEventCreate``, which has similar functionality.
+* The driver API defines a different, but largely overlapping, error code space than the runtime API and uses a different coding convention. For example, the driver API defines ``CUDA_ERROR_INVALID_VALUE``, while the runtime API defines ``cudaErrorInvalidValue``.
+
+The driver API offers two additional pieces of functionality not provided by the runtime API: the ``cuModule`` and ``cuCtx`` APIs.
+
+cuModule API
+------------
+
+The Module section of the driver API provides additional control over how and when accelerator code objects are loaded. For example, the driver API enables code objects to be loaded from files or memory pointers. Symbols for kernels or global data are extracted from the loaded code objects. In contrast, the runtime API automatically loads and, if necessary, compiles all the kernels from an executable binary when it runs. In this mode, kernel code must be compiled using NVCC so that automatic loading can function correctly.
+
+The Module features are useful in an environment that generates the code objects directly, such as a new accelerator language front end. NVCC is not used here. Instead, the environment might have a different kernel language or compilation flow. Other environments have many kernels and don't want all of them to be loaded automatically. The Module functions load the generated code objects and launch kernels. Mirroring the cuModule API, HIP defines a hipModule API that provides comparable explicit control over code object management.
+
+cuCtx API
+---------
+
+The driver API defines "Context" and "Device" as separate entities.
+Contexts contain a single device, and a device can theoretically have multiple contexts.
+Each context contains a set of streams and events specific to the context.
+Historically, contexts also defined a unique address space for the GPU. This might no longer be the case in unified memory platforms, because the CPU and all the devices in the same process share a single unified address space.
+The Context APIs also provide a mechanism to switch between devices, which enables a single CPU thread to send commands to different GPUs.
+HIP and recent versions of the CUDA Runtime provide other mechanisms to accomplish this, for example, using streams or ``cudaSetDevice``.
+
+The CUDA runtime API unifies the Context API with the Device API. This simplifies the APIs with little loss of functionality, because each context can contain a single device, and the benefits of multiple contexts have been replaced with other interfaces.
+HIP provides a Context API to facilitate easy porting from existing driver code.
+In HIP, the ``Ctx`` functions largely provide an alternate syntax for changing the active device.
+
+Most new applications preferentially use ``hipSetDevice`` or the stream APIs. Therefore, HIP has marked the ``hipCtx`` APIs as **deprecated**. Support for these APIs might not be available in future releases. For more details on deprecated APIs, see :doc:`../reference/deprecated_api_list`.
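+
+For example, where driver-style code once created and pushed a context per device, ported HIP code can simply select the device. The following is a minimal sketch of the preferred pattern; it assumes two visible devices and omits error checking for brevity.
+
+.. code-block:: cpp
+
+   #include <hip/hip_runtime.h>
+
+   int main() {
+     // Preferred over the deprecated hipCtx APIs: select the active device directly.
+     hipSetDevice(0);
+     hipStream_t stream0;
+     hipStreamCreate(&stream0); // The stream is bound to device 0
+
+     hipSetDevice(1); // Switch the active device
+     hipStream_t stream1;
+     hipStreamCreate(&stream1); // The stream is bound to device 1
+
+     // Work submitted to a stream executes on the device that was active
+     // when the stream was created.
+     hipStreamDestroy(stream1);
+     hipSetDevice(0);
+     hipStreamDestroy(stream0);
+     return 0;
+   }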
+
+HIP module and Ctx APIs
+=======================
+
+Rather than present two separate APIs, HIP extends the HIP API with new APIs for modules and ``Ctx`` control.
+
+hipModule API
+-------------
+
+Like the CUDA driver API, the Module API provides additional control over how code is loaded, including options to load code from files or from in-memory pointers.
+NVCC and HIP-Clang target different architectures and use different code object formats. NVCC uses ``cubin`` or ``ptx`` files, while the HIP-Clang path uses the ``hsaco`` format.
+The external compilers that generate these code objects are responsible for generating and loading the correct code object for each platform.
+Notably, there is no fat binary format that can contain code for both NVCC and HIP-Clang platforms. The following table summarizes the formats used on each platform:
+
+.. list-table:: Module formats
+   :header-rows: 1
+
+   * - Format
+     - APIs
+     - NVCC
+     - HIP-Clang
+   * - Code object
+     - ``hipModuleLoad``, ``hipModuleLoadData``
+     - ``.cubin`` or PTX text
+     - ``.hsaco``
+   * - Fat binary
+     - ``hipModuleLoadFatBin``
+     - ``.fatbin``
+     - ``.hip_fatbin``
+
+``hipcc`` uses HIP-Clang or NVCC to compile host code. Both of these compilers can embed code objects into the final executable. These code objects are automatically loaded when the application starts.
+The ``hipModule`` API can be used to load additional code objects. When used this way, it extends the capability of the automatically loaded code objects.
+HIP-Clang enables both of these capabilities to be used together. Of course, it is possible to create a program with no kernels and no automatic loading.
+
+hipCtx API
+----------
+
+HIP provides a ``Ctx`` API as a thin layer over the existing device functions. The ``Ctx`` API can be used to set the current context or to query properties of the device associated with the context.
+The current context is implicitly used by other APIs, such as ``hipStreamCreate``.
+
+HIPIFY translation of CUDA driver API
+-------------------------------------
+
+The HIPIFY tools convert CUDA driver APIs for streams, events, modules, devices, memory management, context, and the profiler to the equivalent HIP calls. For example, ``cuEventCreate`` is translated to ``hipEventCreate``.
+HIPIFY tools also convert error codes from the driver namespace and coding conventions to the equivalent HIP error code. HIP unifies the APIs for these common functions.
+
+The memory copy API requires additional explanation. The CUDA driver API includes the memory direction in the name of the API (``cuMemcpyHtoD``), while the CUDA runtime API provides a single memory copy API with a parameter that specifies the direction. It also supports a "default" direction where the runtime determines the direction automatically.
+HIP provides APIs with both styles, for example, ``hipMemcpyHtoD`` as well as ``hipMemcpy``; both are shown in the sketch at the end of this section.
+The first version might be faster in some cases because it avoids any host overhead needed to detect the memory direction.
+
+HIP defines a single error space and uses camel case for all errors (for example, ``hipErrorInvalidValue``).
+
+For further information, see :doc:`hipify:index`.
+
+**Address spaces**
+
+HIP-Clang defines a process-wide address space where the CPU and all devices allocate addresses from a single unified pool.
+This means addresses can be shared between contexts. Unlike the original CUDA implementation, a new context does not create a new address space for the device.
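+
+As an illustration of the two copy styles described above, the following is a minimal sketch (error checking omitted). The ``hipMemcpyDefault`` direction relies on the unified address space to infer the transfer direction.
+
+.. code-block:: cpp
+
+   #include <hip/hip_runtime.h>
+
+   int main() {
+     const size_t size = 1024 * sizeof(float);
+     float *host = new float[1024];
+     void *device;
+     hipMalloc(&device, size);
+
+     // Style 1: the direction is encoded in the API name (driver style).
+     hipMemcpyHtoD((hipDeviceptr_t)device, host, size);
+
+     // Style 2: a single API with the direction passed as a parameter,
+     // or detected automatically from the unified address space.
+     hipMemcpy(device, host, size, hipMemcpyHostToDevice);
+     hipMemcpy(device, host, size, hipMemcpyDefault);
+
+     hipFree(device);
+     delete[] host;
+     return 0;
+   }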
+
+**Using hipModuleLaunchKernel**
+
+Both the CUDA driver and runtime APIs define a function for launching kernels, called ``cuLaunchKernel`` or ``cudaLaunchKernel``. The equivalent API in HIP is ``hipModuleLaunchKernel``.
+The kernel arguments and the execution configuration (grid dimensions, group dimensions, dynamic shared memory, and stream) are passed as arguments to the launch function.
+The runtime API additionally provides the ``<<< >>>`` syntax for launching kernels, which resembles a special function call and is easier to use than the explicit launch API, especially when handling kernel arguments.
+However, this syntax is not standard C++ and is available only when NVCC is used to compile the host code.
+
+**Additional information**
+
+HIP-Clang creates a primary context when the HIP API is called. So, in pure driver API code, HIP-Clang creates a primary context, while HIP on NVCC has an empty context stack. HIP-Clang pushes the primary context onto the context stack when it is empty. This can lead to subtle differences in applications that mix the runtime and driver APIs.
+
+HIP-Clang implementation notes
+------------------------------
+
+**.hip_fatbin**
+
+HIP-Clang links device code from different translation units together. For each device target, it generates a code object. ``clang-offload-bundler`` bundles code objects for different device targets into one fat binary, which is embedded as the global symbol ``__hip_fatbin`` in the ``.hip_fatbin`` section of the ELF file of the executable or shared object.
+
+**Initialization and termination functions**
+
+HIP-Clang generates initialization and termination functions for each translation unit for host code compilation. The initialization functions call ``__hipRegisterFatBinary`` to register the fat binary embedded in the ELF file. They also call ``__hipRegisterFunction`` and ``__hipRegisterVar`` to register kernel functions and device-side global variables. The termination functions call ``__hipUnregisterFatBinary``.
+HIP-Clang emits a global variable ``__hip_gpubin_handle`` of type ``void**`` with ``linkonce`` linkage and an initial value of 0 for each host translation unit. Each initialization function checks ``__hip_gpubin_handle`` and registers the fat binary only if ``__hip_gpubin_handle`` is 0, saving the handle returned by ``__hipRegisterFatBinary`` to ``__hip_gpubin_handle``. This guarantees that the fat binary is only registered once. A similar check is performed in the termination functions.
+
+**Kernel launching**
+
+HIP-Clang supports kernel launching using either the CUDA ``<<<>>>`` syntax, ``hipLaunchKernel``, or ``hipLaunchKernelGGL``. The last option is a macro which expands to the CUDA ``<<<>>>`` syntax by default. It can also be turned into a template by defining ``HIP_TEMPLATE_KERNEL_LAUNCH``. A sketch of the two launch styles follows.
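+
+This is a minimal sketch, using a hypothetical ``axpy`` kernel; the two launches are equivalent because ``hipLaunchKernelGGL`` expands to the triple-chevron form by default.
+
+.. code-block:: cpp
+
+   #include <hip/hip_runtime.h>
+
+   __global__ void axpy(float a, const float *x, float *y, int n) {
+     int i = blockIdx.x * blockDim.x + threadIdx.x;
+     if (i < n) y[i] = a * x[i] + y[i];
+   }
+
+   void launchBothWays(float a, const float *x, float *y, int n) {
+     dim3 grid((n + 255) / 256);
+     dim3 block(256);
+
+     // CUDA-style triple-chevron launch (non-standard C++ syntax)
+     axpy<<<grid, block, 0, 0>>>(a, x, y, n);
+
+     // Equivalent launch through the hipLaunchKernelGGL macro
+     hipLaunchKernelGGL(axpy, grid, block, 0, 0, a, x, y, n);
+   }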
+
+When the executable or shared library is loaded by the dynamic linker, the initialization functions are called. In the initialization functions, the code objects containing all kernels are loaded when ``__hipRegisterFatBinary`` is called. When ``__hipRegisterFunction`` is called, the stub functions are associated with the corresponding kernels in the code objects.
+
+HIP-Clang implements two sets of APIs for launching kernels.
+By default, when HIP-Clang encounters the ``<<<>>>`` statement in the host code, it first calls ``hipConfigureCall`` to set up the threads and grids. It then calls the stub function with the given arguments. The stub function calls ``hipSetupArgument`` for each kernel argument, then calls ``hipLaunchByPtr`` with a function pointer to the stub function. In ``hipLaunchByPtr``, the real kernel associated with the stub function is launched.
+
+NVCC implementation notes
+-------------------------
+
+**Interoperation between HIP and CUDA driver**
+
+CUDA applications might want to mix CUDA driver code with HIP code (see the example below). This table shows the equivalence between CUDA and HIP types required to implement this interaction.
+
+.. list-table:: Equivalence table between HIP and CUDA types
+   :header-rows: 1
+
+   * - HIP type
+     - CUDA Driver type
+     - CUDA Runtime type
+   * - ``hipModule_t``
+     - ``CUmodule``
+     -
+   * - ``hipFunction_t``
+     - ``CUfunction``
+     -
+   * - ``hipCtx_t``
+     - ``CUcontext``
+     -
+   * - ``hipDevice_t``
+     - ``CUdevice``
+     -
+   * - ``hipStream_t``
+     - ``CUstream``
+     - ``cudaStream_t``
+   * - ``hipEvent_t``
+     - ``CUevent``
+     - ``cudaEvent_t``
+   * - ``hipArray``
+     - ``CUarray``
+     - ``cudaArray``
+
+**Compilation options**
+
+The ``hipModule_t`` interface does not support the ``cuModuleLoadDataEx`` function, which is used to control PTX compilation options.
+HIP-Clang does not use PTX, so it does not support these compilation options.
+In fact, HIP-Clang code objects always contain fully compiled code for a device-specific instruction set and do not require additional compilation as a part of the load step.
+The corresponding HIP function ``hipModuleLoadDataEx`` behaves like ``hipModuleLoadData`` on the HIP-Clang path (where compilation options are not used) and like ``cuModuleLoadDataEx`` on the NVCC path.
+
+For example:
+
+.. tab-set::
+
+   .. tab-item:: HIP
+
+      .. code-block:: cpp
+
+         hipModule_t module;
+         void *imagePtr = ...; // Somehow populate data pointer with code object
+
+         const int numOptions = 1;
+         hipJitOption options[numOptions];
+         void *optionValues[numOptions];
+
+         options[0] = hipJitOptionMaxRegisters;
+         unsigned maxRegs = 15;
+         optionValues[0] = (void *)(&maxRegs);
+
+         // hipModuleLoadData(&module, imagePtr) is called on the HIP-Clang path
+         // (the JIT options are ignored), and cuModuleLoadDataEx(&module, imagePtr,
+         // numOptions, options, optionValues) is called on the NVCC path
+         hipModuleLoadDataEx(&module, imagePtr, numOptions, options, optionValues);
+
+         hipFunction_t k;
+         hipModuleGetFunction(&k, module, "myKernel");
+
+   .. tab-item:: CUDA
+
+      .. code-block:: cpp
+
+         CUmodule module;
+         void *imagePtr = ...; // Somehow populate data pointer with code object
+
+         const int numOptions = 1;
+         CUjit_option options[numOptions];
+         void *optionValues[numOptions];
+
+         options[0] = CU_JIT_MAX_REGISTERS;
+         unsigned maxRegs = 15;
+         optionValues[0] = (void *)(&maxRegs);
+
+         cuModuleLoadDataEx(&module, imagePtr, numOptions, options, optionValues);
+
+         CUfunction k;
+         cuModuleGetFunction(&k, module, "myKernel");
+
+The sample below shows how to use ``hipModuleGetFunction``.
+
+.. code-block:: cpp
+
+   #include <hip/hip_runtime.h>
+   #include <hip/hip_runtime_api.h>
+
+   #include <cstring>
+   #include <iostream>
+   #include <vector>
+
+   #define LEN 64
+   #define SIZE (LEN << 2)
+
+   #ifdef __HIP_PLATFORM_AMD__
+   #define fileName "vcpy_isa.co"
+   #endif
+
+   #ifdef __HIP_PLATFORM_NVIDIA__
+   #define fileName "vcpy_isa.ptx"
+   #endif
+
+   #define kernel_name "hello_world"
+
+   int main() {
+     float *A, *B;
+     hipDeviceptr_t Ad, Bd;
+     A = new float[LEN];
+     B = new float[LEN];
+
+     for (uint32_t i = 0; i < LEN; i++) {
+       A[i] = i * 1.0f;
+       B[i] = 0.0f;
+       std::cout << A[i] << " " << B[i] << std::endl;
+     }
+
+   #ifdef __HIP_PLATFORM_NVIDIA__
+     hipInit(0);
+     hipDevice_t device;
+     hipCtx_t context;
+     hipDeviceGet(&device, 0);
+     hipCtxCreate(&context, 0, device);
+   #endif
+
+     hipMalloc((void **)&Ad, SIZE);
+     hipMalloc((void **)&Bd, SIZE);
+
+     hipMemcpyHtoD(Ad, A, SIZE);
+     hipMemcpyHtoD(Bd, B, SIZE);
+     hipModule_t Module;
+     hipFunction_t Function;
+     hipModuleLoad(&Module, fileName);
+     hipModuleGetFunction(&Function, Module, kernel_name);
+
+     std::vector<void *> argBuffer(2);
+     memcpy(&argBuffer[0], &Ad, sizeof(void *));
+     memcpy(&argBuffer[1], &Bd, sizeof(void *));
+
+     size_t size = argBuffer.size() * sizeof(void *);
+
+     void *config[] = {HIP_LAUNCH_PARAM_BUFFER_POINTER, &argBuffer[0],
+                       HIP_LAUNCH_PARAM_BUFFER_SIZE, &size, HIP_LAUNCH_PARAM_END};
+
+     hipModuleLaunchKernel(Function, 1, 1, 1, LEN, 1, 1, 0, 0, NULL,
+                           (void **)&config);
+
+     hipMemcpyDtoH(B, Bd, SIZE);
+     for (uint32_t i = 0; i < LEN; i++) {
+       std::cout << A[i] << " - " << B[i] << std::endl;
+     }
+
+   #ifdef __HIP_PLATFORM_NVIDIA__
+     hipCtxDetach(context);
+   #endif
+
+     return 0;
+   }
+
+HIP module and texture Driver API
+=================================
+
+HIP supports texture driver APIs. However, texture references must be declared within the host scope. The following code demonstrates the use of texture references for the ``__HIP_PLATFORM_AMD__`` platform.
+
+.. code-block:: cpp
+
+   // Code to generate code object
+
+   #include "hip/hip_runtime.h"
+   extern texture<float, 2, hipReadModeElementType> tex;
+
+   __global__ void tex2dKernel(hipLaunchParm lp, float *outputData, int width,
+                               int height) {
+     int x = blockIdx.x * blockDim.x + threadIdx.x;
+     int y = blockIdx.y * blockDim.y + threadIdx.y;
+     outputData[y * width + x] = tex2D(tex, x, y);
+   }
+
+.. code-block:: cpp
+
+   // Host code:
+
+   texture<float, 2, hipReadModeElementType> tex;
+
+   void myFunc ()
+   {
+      // ...
+
+      textureReference* texref;
+      hipModuleGetTexRef(&texref, Module1, "tex");
+      hipTexRefSetAddressMode(texref, 0, hipAddressModeWrap);
+      hipTexRefSetAddressMode(texref, 1, hipAddressModeWrap);
+      hipTexRefSetFilterMode(texref, hipFilterModePoint);
+      hipTexRefSetFlags(texref, 0);
+      hipTexRefSetFormat(texref, HIP_AD_FORMAT_FLOAT, 1);
+      hipTexRefSetArray(texref, array, HIP_TRSA_OVERRIDE_FORMAT);
+
+      // ...
+   }
+
+Driver Entry Point Access
+=========================
+
+Starting from HIP version 6.2.0, support for Driver Entry Point Access is available when using CUDA 12.0 or newer. This feature lets developers retrieve the addresses of runtime functions directly, providing more control over GPU operations.
+
+Driver Entry Point Access provides several features:
+
+* Retrieving the address of a runtime function
+* Requesting the default stream version on a per-thread basis
+* Accessing new HIP features on older toolkits with a newer driver
+
+Address retrieval
+-----------------
+
+The ``hipGetProcAddress`` function can be used to obtain the address of a runtime function. This is demonstrated in the following example:
+
+.. code-block:: cpp
+
+   #include <hip/hip_runtime.h>
+   #include <hip/hip_runtime_api.h>
+
+   #include <iostream>
+
+   typedef hipError_t (*hipInit_t)(unsigned int);
+
+   int main() {
+     // Initialize the HIP runtime
+     hipError_t res = hipInit(0);
+     if (res != hipSuccess) {
+       std::cerr << "Failed to initialize HIP runtime." << std::endl;
+       return 1;
+     }
+
+     // Get the address of the hipInit function
+     hipInit_t hipInitFunc;
+     int hipVersion = HIP_VERSION; // Use the HIP version defined in hip_runtime_api.h
+     uint64_t flags = 0; // No special flags
+     hipDriverProcAddressQueryResult symbolStatus;
+
+     res = hipGetProcAddress("hipInit", (void**)&hipInitFunc, hipVersion, flags, &symbolStatus);
+     if (res != hipSuccess) {
+       std::cerr << "Failed to get address of hipInit()." << std::endl;
+       return 1;
+     }
+
+     // Call the hipInit function using the obtained address
+     res = hipInitFunc(0);
+     if (res == hipSuccess) {
+       std::cout << "HIP runtime initialized successfully using hipGetProcAddress()." << std::endl;
+     } else {
+       std::cerr << "Failed to initialize HIP runtime using hipGetProcAddress()." << std::endl;
+     }
+
+     return 0;
+   }
+
+Per-thread default stream version request
+-----------------------------------------
+
+HIP offers functionality similar to CUDA for managing streams on a per-thread basis. By using ``hipStreamPerThread``, each thread can independently manage its default stream, simplifying operations. The following example demonstrates how to obtain and use the per-thread default stream.
+
+.. code-block:: cpp
+
+   #include <hip/hip_runtime.h>
+
+   #include <iostream>
+
+   int main() {
+     // Initialize the HIP runtime
+     hipError_t res = hipInit(0);
+     if (res != hipSuccess) {
+       std::cerr << "Failed to initialize HIP runtime." << std::endl;
+       return 1;
+     }
+
+     // Get the per-thread default stream
+     hipStream_t stream = hipStreamPerThread;
+
+     // Use the stream for some operation
+     // For example, allocate memory on the device
+     void* d_ptr;
+     size_t size = 1024;
+     res = hipMalloc(&d_ptr, size);
+     if (res != hipSuccess) {
+       std::cerr << "Failed to allocate memory." << std::endl;
+       return 1;
+     }
+
+     // Perform some operation using the stream
+     // For example, set memory on the device
+     res = hipMemsetAsync(d_ptr, 0, size, stream);
+     if (res != hipSuccess) {
+       std::cerr << "Failed to set memory." << std::endl;
+       return 1;
+     }
+
+     // Synchronize the stream
+     res = hipStreamSynchronize(stream);
+     if (res != hipSuccess) {
+       std::cerr << "Failed to synchronize stream." << std::endl;
+       return 1;
+     }
+
+     std::cout << "Operation completed successfully using per-thread default stream." << std::endl;
+
+     // Free the allocated memory
+     hipFree(d_ptr);
+
+     return 0;
+   }
+
+Accessing new HIP features with a newer driver
+----------------------------------------------
+
+HIP is designed to be forward compatible, allowing newer features to be utilized with older toolkits, provided a compatible driver is present. Feature support can be verified through runtime API functions and version checks. This approach ensures that applications can benefit from new features and improvements in the HIP runtime without needing to be recompiled with a newer toolkit. The ``hipGetProcAddress`` function enables dynamic querying and the use of newer functions offered by the HIP runtime, even if the application was built with an older toolkit.
+
+An example is provided for a hypothetical ``foo()`` function.
+
+.. code-block:: cpp
+
+   // Get the address of the hypothetical foo function
+   foo_t fooFunc;
+   int hipVersion = 60300000; // Use a specific HIP version number (e.g. 6.3.0)
+   uint64_t flags = 0; // No special flags
+   hipDriverProcAddressQueryResult symbolStatus;
+
+   res = hipGetProcAddress("foo", (void**)&fooFunc, hipVersion, flags, &symbolStatus);
+
+The HIP version number is defined as an integer:
+
+.. code-block:: cpp
+
+   HIP_VERSION = HIP_VERSION_MAJOR * 10000000 + HIP_VERSION_MINOR * 100000 + HIP_VERSION_PATCH
+
+For further details, see :doc:`../how-to/faq`.
+
+Reference
+=========
+
+For driver API reference, visit :ref:`driver_api_reference`.
diff --git a/docs/index.md b/docs/index.md
index a659b9b83a..8dbc2441e5 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -60,6 +60,7 @@ On non-AMD platforms, like NVIDIA, HIP provides header files required to support
 * [HSA runtime API for ROCm](./reference/virtual_rocr)
 * [C++ language extensions](./reference/cpp_language_extensions)
 * [C++ language support](./reference/cpp_language_support)
+* [Driver API](./reference/driver_api_reference)
 * [HIP math API](./reference/math_api)
 * [Comparing syntax for different APIs](./reference/terms)
 * [List of deprecated APIs](./reference/deprecated_api_list)
diff --git a/docs/reference/driver_api_reference.rst b/docs/reference/driver_api_reference.rst
new file mode 100644
index 0000000000..22475e464b
--- /dev/null
+++ b/docs/reference/driver_api_reference.rst
@@ -0,0 +1,33 @@
+.. meta::
+   :description: This page lists the driver API functions supported by HIP.
+   :keywords: AMD, ROCm, HIP, CUDA, driver API
+
+.. _driver_api_reference:
+
+*******************************************************************************
+Driver API reference
+*******************************************************************************
+
+This page presents a list of driver API functions supported by HIP.
+
+For comparison, compatibility, and version information with the CUDA driver API, see :doc:`hipify:index`.
+
+HIP driver API
+==============
+
+Context
+-------
+
+.. doxygengroup:: Context
+   :content-only:
+
+Module
+------
+
+.. doxygengroup:: Module
+   :content-only:
+
+Driver Entry Point Access
+-------------------------
+
+.. doxygenfunction:: hipGetProcAddress
diff --git a/docs/sphinx/_toc.yml.in b/docs/sphinx/_toc.yml.in
index 9b942953e0..45360b5d71 100644
--- a/docs/sphinx/_toc.yml.in
+++ b/docs/sphinx/_toc.yml.in
@@ -85,6 +85,7 @@ subtrees:
         title: C++ language extensions
       - file: reference/cpp_language_support
         title: C++ language support
+      - file: reference/driver_api_reference
       - file: reference/math_api
       - file: reference/terms
         title: Comparing syntax for different APIs