[CINN] Implement the new RearrangeLoadInstruction pass #70258
Merged
PR Category
CINN
PR Types
Improvements
Description
Rearrange global memory loads to the front of expressions to optimize the instruction pipeline at the assembly level on GPUs.
Model Testing
Tested on A100; unit: ips.
Introduction
This pass operates on leaf blocks (blocks in the innermost loops). It first extracts loads from each schedule block in a leaf block, then places these loads at the beginning of the block. By doing so, it overlaps the memory latency of multiple loads, minimizes pipeline stalls, and therefore improves throughput.
Background
GPU architectures are characterized by deep, in-order execution pipelines. Unlike modern CPUs, which can execute instructions out of order at the hardware level, GPUs follow a strict in-order execution model. Therefore, when a subsequent instruction depends on a previous one that requires a significant amount of time to complete, the pipeline will stall, severely impacting performance.
For example, consider the following assembly code:
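The original snippet is not preserved in this text; a hypothetical SASS-like sketch of both the original and the rearranged sequences (instruction mnemonics and register names are assumptions) could look like:

```
; Original order: (I2) stalls waiting for (I1)'s load to return
LDG.E R0, [R2]      ; (I1) long-latency global load
FADD  R1, R0, R0    ; (I2) depends on (I1)
LDG.E R3, [R4]      ; (I3) independent long-latency load

; Rearranged: both loads issued back-to-back, latencies overlap
LDG.E R0, [R2]      ; (I1)
LDG.E R3, [R4]      ; (I3) issued while (I1) is still in flight
FADD  R1, R0, R0    ; (I2)
```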
In this sequence, instruction (I2) depends on the result of (I1). If (I1) is a long-latency load operation, taking a significant amount of time (let's say T0), (I2) cannot be issued until (I1) completes. This dependency effectively blocks all succeeding instructions from being dispatched. Moreover, (I3) cannot be issued until both (I1) and (I2) are completed. If (I3) is also a long-latency load taking the same time, T0, we would spend approximately 2*T0 on this segment of code.

However, by observing that (I2) and (I3) are independent of each other, we can rearrange the instructions as follows:

In this reordered sequence, (I1) and (I3) can be issued in parallel because they do not have dependencies on each other. If there is sufficient memory bandwidth, (I1) and (I3) will complete concurrently in T0, reducing the total execution time to nearly T0!

Performance Impact
This pass can enhance performance by up to 20% for both Reduce and Trivial. The improvement is often more pronounced when expressions involve complex ops (e.g. div, exp and rsqrt) and when multiple schedule blocks exist within one leaf block. The performance gain comes from the fact that NVCC tends to conserve registers and employs a lazy approach to software pipelining. By applying this pass, we force NVCC to use more registers and engage in more aggressive software pipelining.
However, there are also random cases where this pass may decrease performance. The reason is not yet clear (perhaps suboptimal unrolling or register spilling). We have adopted strategies to avoid these cases, such as limiting the maximum number of loads to rearrange and forbidding certain patterns. While we cannot currently guarantee a consistent improvement, our experiments indicate that the performance degradation is within 5% in the worst case.
Limitations
Examples
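The concrete IR examples are not preserved in this text. As a hypothetical sketch (variable names assumed, not taken from the PR), the pass transforms a schedule block roughly like this:

```
// Before:
ScheduleBlock(var_5) {
  var_5[k] = exp(var_0[k]) + var_0[k] * var_1[k]
}

// After RearrangeLoadInstruction:
ScheduleBlock(var_5) {
  local_var_0 = var_0[k]   // var_0[k] is used twice but loaded once
  local_var_1 = var_1[k]   // both loads issued before the arithmetic
  var_5[k] = exp(local_var_0) + local_var_0 * local_var_1
}
```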
Note: The reduce var itself (`var_2[k]`) is not rearranged.

Note: `var_0` is used twice but only loaded once.

Note: `var_1[var_0[k]]` has indirect indices, `var_2[k]` only appears in one branch of Select, and `var_3[k]` in ScheduleBlock(var_4) has a data dependency with ScheduleBlock(var_3); none of them can be rearranged.

Pcard-85711