[WIP] Playing around with grid persistence. #1779
Conversation
…ating to part of the DAG (#1775)

`MaxInfoPropagator` is renamed to `MaxInfoSpanningTree`; it now only does path-finding, and the propagation is in a separate class, `MaxInfoSpanningTree::Propagator`. Same for `MaxRootDomainInfoPropagator`. `MaxInfoSpanningTree` and `MaxRootDomainInfoSpanningTree` now allow specifying a selector, which controls which subgraph should be included in path-finding. `MaxRootDomainInfoSpanningTree` also gets a few new convenience constructors. `TransformPropagator` is now a subclass of `MaxInfoSpanningTree::Propagator`, so the way to use it has changed.

Now `MaxInfoSpanningTree` and `MaxRootDomainInfoSpanningTree` store the path after generation so that the same path can be traversed multiple times. This will be useful to support use cases like the new `computeAt`. Pseudo-code:

```C++
void TensorView::computeAt(TensorView* tv, int pos) {
  ComputeAtSubgraphSelector selector(this, tv);
  MaxRootDomainInfoSpanningTree path(tv, pos, &selector);
  TransformPropagator propagator(tv, pos);
  path.traverse(&propagator);
  ComputeAtPosPropagator ca_propagator(tv, pos);
  path.traverse(&ca_propagator);
}
```
I'm getting 142GB/s on this size right now, which is underwhelming, but it's better than where it started.
Thanks! I saw performance regressions when I disabled Welford translations. It seemed it was actually better not to translate for outer-reduction cases, but for inner-reduction batch norms I saw up to 20% regressions. I'll report back after double-checking what I saw.
…atmul and tensor contractions (#1761)

Co-authored-by: Christian Sarofeen <csarofeen@nvidia.com>
// ->Args({128, 512, 14})
// ->Args({128, 1024, 14})
// ->Args({128, 1024, 7})
->Args({8, 4096, 64})
TODO: Remove before considering this for merging.
@@ -1963,6 +1963,11 @@ class CudaKernelGenerator : private OptOutConstDispatch {
  const auto reduction_name =
      genFusedReductionName(alloc_fused_reduction->out()->view());

  if (!alloced_reduction_classes.emplace(alloc_fused_reduction->out()->view())
With Welford, in an unswitch section there were multiple definitions being created, because the grid reduce was being used in both the then and else sections of the code. We should go fix the actual insertion of the allocations instead of WAR-ing the codegen.
@@ -411,7 +411,8 @@ class CombineReductions;

//! Options to configure/debug candidate finder
struct TORCH_CUDA_CU_API SegmentCandidateFinderOptions {
  bool run_translate_welford = true;
  // Temporarily disable by default for now. May want to revisit.
  bool run_translate_welford = false;
We want to use Welford for grid persistence, so we're forcing translation off everywhere for now.
@@ -171,15 +171,6 @@ ReductionParams innerPersistentHeuristic(
  const int64_t max_multi_reduction_factor = scheduler_utils::safeDiv(
      scheduler_utils::register_file_size, max_persistent_buffer_size);

Removing inner persistent changes for now.
@@ -483,45 +432,25 @@ ReductionParams innerPersistentHeuristic(
  }

  if (godim > 1) {
    if (rparams.cross_grid_inner_reduction) {
      rparams.grid_dim_iter_dom = ParallelType::BIDy;
      rparams.grid_dim_iter_dom = ParallelType::BIDx;
Should have kept this change instead of reverting it. Can we isolate this into a standalone PR, since it seems to be a fix?
As the grid persistence is reverted, does this change make any difference anymore? Am I missing something?
I was trying to just remove the changes unrelated to outer persistent reductions since that's what we really need quickly.
@@ -593,7 +524,7 @@ ReductionParams outerPersistentHeuristic(
      (int64_t)16 / (int64_t)max_input_dtype_size,
      // Reduce unrolling if we have many inputs, start reduction at 4 inputs
      scheduler_utils::lastPow2(
          scheduler_utils::safeDiv((int64_t)n_tensor_inputs, 2)));
          scheduler_utils::safeDiv((int64_t)n_tensor_inputs, 4)));
`>>2` is the equivalent of dividing by 4, not 2.
😳
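For reference, a quick standalone check of that arithmetic (this is an illustrative snippet, not the scheduler code; `safeDiv` is assumed to behave like plain integer division here):

```C++
#include <cassert>

int main() {
  const int n_tensor_inputs = 8;
  // A right shift by 2 divides by 4 ...
  assert((n_tensor_inputs >> 2) == n_tensor_inputs / 4); // 8 >> 2 == 2
  // ... whereas dividing by 2 gives a larger unroll bound.
  assert(n_tensor_inputs / 2 == 4);
  return 0;
}
```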
          num_warp_per_partition) {
    bdimy++;
  }
  // // Increase bdimy until we hit the SM partition granularity
I removed this only because I didn't understand what you're trying to do here. On second thought, I assume you're just trying to avoid going across warp boundaries that would allocate more registers, but I'm uncertain why.
I agree we probably want to consider these boundaries, but we're already limiting the register file quite significantly (more so than this could inflate the registers by).
This was just copied over from your original PR. I thought it was meant to increase the number of threads within the limit of the 32 * #partitions granularity, as doing so would have no actual impact on resource usage.
That's probably right; we can revisit after we get a bit more out of the currently available perf.
std::cout << "7: " << bdimx << ", " << bdimy << std::endl; | ||
// Disable for now, it can go from taking half a warp, to reducing it to a | ||
// quarter warp. On cases where we're dealing with fp16 values and we're not | ||
// unrolled, this would mean loads are only 8xfp16 words strided across tidy. |
This ^^
  // Set gdimy.
  if (!block_persist) {
    gdimy = sm_required_per_norm_set;
  }
Need to set gdimy before trying to figure out the size of gdimx
Yeah, thanks for the fix.
* Fix div(scalar, tensor)
* lintrunner: clang-format
Thanks for the PR.
@@ -1243,30 +1237,39 @@ class PersistentKernelScheduler : public SchedulerEntry {
  const int64_t required_sm_per_norm =
      ceilDiv(persistent_buffer_size, scheduler_utils::register_file_size);

  // If the persistence requires over half the device don't do grid
  // persistence as we can't overlap the grid comms.
What is overlapped with the grid comms?
Just different groups of blocks running at the same time. I just don't think we want all SMs participating in the same grid persistent allreduce.
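For concreteness, here is a minimal sketch of the kind of check being described, based on the diff and comment above. The function name and the `device_multiprocessor_count` parameter are placeholders for illustration; only `required_sm_per_norm = ceilDiv(persistent_buffer_size, register_file_size)` comes from the diff itself.

```C++
#include <cstdint>

// Reject grid persistence when a single norm set would need more than half of
// the device's SMs, so that different groups of blocks can still run
// concurrently around the grid communication.
bool allowGridPersistence(
    std::int64_t persistent_buffer_size,
    std::int64_t register_file_size,
    std::int64_t device_multiprocessor_count) {
  // ceilDiv(persistent_buffer_size, register_file_size), as in the diff above.
  const std::int64_t required_sm_per_norm =
      (persistent_buffer_size + register_file_size - 1) / register_file_size;
  return required_sm_per_norm <= device_multiprocessor_count / 2;
}
```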
Do you know what the bandwidth is with the non-persistent scheduler?
The sibling path is required to generate consistent replay for some cases where `MaxInfoSpanningTree` is used with a selector, for example, when the producer of a Welford is excluded from the propagation section. See test `FusionTransformPropagateSelectorSibling_CUDA` for a detailed example. Besides, since we know that siblings should be transformed exactly the same, the sibling path is a perfect next hop for preserving information. If you want a spanning tree without a sibling path, you can override `allowSibling` to `return false` in your selector.
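As a rough illustration of that last point, here is a minimal sketch of a selector that opts out of the sibling path. The exact hook names (`allowC2P`/`allowP2C`/`allowSibling`) are an assumption based on the descriptions in this thread; the real interface may differ.

```C++
// Hypothetical selector sketch: include every producer/consumer edge in
// path-finding, but skip sibling edges so no sibling path is generated.
struct NoSiblingSelector : public MaxInfoSpanningTree::Selector {
  bool allowC2P(TensorView* from, TensorView* to) override {
    return true; // follow consumer-to-producer edges
  }
  bool allowP2C(TensorView* from, TensorView* to) override {
    return true; // follow producer-to-consumer edges
  }
  bool allowSibling(TensorView* from, TensorView* to) override {
    return false; // opt out of the sibling path described above
  }
};

// Usage sketch, following the computeAt pseudo-code earlier in the thread:
//   NoSiblingSelector selector;
//   MaxRootDomainInfoSpanningTree tree(reference_tv, pos, &selector);
//   TransformPropagator propagator(reference_tv, pos);
//   tree.traverse(&propagator);
```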
  bool vectorize = false;

  std::cout << "4: " << bdimx << ", " << bdimy << std::endl;
  // Move unrolling factor into vectorization upto vectorization limit.
  if (vectorize_factor > 1 && iter_unroll_factor > 1) {
This block needs to be protected because it's setting `iter_unroll_factor` but not checking `max_multi_reduction_factor`. We need to change this block to:

```C++
if (vectorize_factor > 1 && iter_unroll_factor > 1) {
  vectorize = true;
  iter_unroll_factor = std::min(
      scheduler_utils::lastPow2(iter_unroll_factor),
      (int64_t)vectorize_factor);
  iter_unroll_factor = std::min(
      iter_unroll_factor,
      scheduler_utils::safeDiv(max_multi_reduction_factor, bdimx));
}
```
For some reason I don't seem to have a response block for your inline comments:
Just ignore any changes I made to inner persistent for now. I was just trying to isolate out cleanup from heuristic changes. Just focus on the outer persistent stuff, and you can push fixes to inner persistent in another PR.
Yeah I think you're right, let's revisit this later to see if it's actually valuable.
Let me pull it.
With grid persistence:

Without:
There are definitely some register usage issues. I think we need to check on
Disable memory aliasing for inner sharing across serial broadcast.
This comment in the code describes what this PR is doing:

```C++
// Note: [Using multiple TransformPropagators]
// There are cases that we use multiple TransformPropagators along different
// spanning trees with different references in the same fusion. Some of these
// spanning trees could overlap. In cases when there are overlapping nodes,
// TransformPropagator needs to respect the replay of others, because the
// current TransformPropagator might not contain the most amount of
// information on how to do the correct transformation. The logic below tells
// TransformPropagator to skip the replay when not necessary.
```
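To make the scenario concrete, here is a rough usage sketch following the API described earlier in this thread. The tensor names, the function, and the single-argument constructor forms are assumptions for illustration; the overlap-skipping behavior itself lives inside `TransformPropagator`.

```C++
// Two propagations from different references whose spanning trees overlap.
// Hypothetical tensors tv_a and tv_b share part of their producer/consumer
// subgraph, so some nodes are visited by both traversals.
void scheduleWithTwoReferences(TensorView* tv_a, TensorView* tv_b) {
  // First propagation: replay everything reachable from tv_a.
  MaxRootDomainInfoSpanningTree tree_a(tv_a);
  TransformPropagator propagator_a(tv_a);
  tree_a.traverse(&propagator_a);

  // Second propagation: tv_b's tree overlaps tv_a's. On the overlapping
  // nodes, TransformPropagator skips the replay when the existing
  // transformation already carries the needed information, per the note.
  MaxRootDomainInfoSpanningTree tree_b(tv_b);
  TransformPropagator propagator_b(tv_b);
  tree_b.traverse(&propagator_b);
}
```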
Caching strides along with sizes. This is to support the current expand, which introduces non-contiguous output tensors.
Rewrite of the compute at pass to rely on the new propagation mechanisms.
Revert MaxProducerPosUpdater to old algo.
This PR makes `InlinePropagator` just set compute-at positions. It will not replay any tensor. If you want to replay, please use `TransformPropagator` and friends to do so. Currently, `InlinePropagator` is already asserting no replay for standard and best-effort compute at, so this PR is mostly about making most-inlined compute at work as well. This PR also does a lot of cleanup to remove the word "replay" from comments and from variable and function names in `InlinePropagator`. I also cleaned up `recordReplayedPos` and `retrieveReplayedPos`; now the logic is much easier to understand.
Per offline discussion with @csarofeen, this PR does a lot of renaming for better coding style. For all propagation-related things, I am now using the names `P2C` and `C2P` instead of `CasP` and `PasC`, because "A as B" somewhat implies we want to replay A the same as B, while "B to A" sounds more general and is a better word for this case. Also, I modified the order of function arguments to match the order in the name. For example, `PasC` should have `(producer, consumer)` or `(to, from)`, but not `(consumer, producer)` or `(from, to)`, and `C2P` should have `(consumer, producer)` or `(from, to)`, but not `(producer, consumer)` or `(to, from)`.
1. Add support for `_to_copy()` to support AMP casts.
2. Refactored cast; accept none for dtype.
3. Python tests.

Co-authored-by: jjsjann123 <jiej@nvidia.com>
…xing logic (#1784) Co-authored-by: Christian Sarofeen <csarofeen@nvidia.com>
I just realized that `InlinePropagator` can be further simplified because it no longer replays. Since `InlinePropagator` is no longer doing replay, it is more like a "for each" problem rather than a propagation problem: for each tensor `tv`, if we already know the max position of `tv` that is mapped to the reference tensor's selected outer dimensions (stored in `mapped_reference_pos_` in the code), setting the CA position is a very local operation, and is as simple as checking `tv` itself and all its consumers to determine the inline position. `InlinePropagator` is not completely a "for each" problem only because the computation of `mapped_reference_pos_` is a propagation problem.

This cleanup reorganizes the code of `InlinePropagator` so it is clear that `InlinePropagator` is nothing but a two-step process:

Step 1: Do a propagation to find the `mapped_reference_pos_` for all tensors.
Step 2: For each tensor, check itself and its consumers to determine the CA position.

Conceptually, I would like to split step 1 from step 2, because this split keeps these concepts decoupled. In particular, this PR makes `mapped_reference_pos_` only contain info about the reference tensor, independent of the CA position (currently, this is not true for best-effort and most-inlined computeAt without this PR). Now, in my view, `InlinePropagator` is conceptually very simple and easy to understand.

In terms of implementation, step 1 and step 2 can be interleaved, because we don't need to know the `mapped_reference_pos_` of `tv`'s consumers in order to compute the CA position of `tv`. So a one-pass traversal can do both step 1 and step 2 together.
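As a rough mental model of that two-step structure, here is a self-contained sketch. Everything in it is illustrative: the `TensorView` stub, the helper names, and the traversal are stand-ins, with only the `mapped_reference_pos_` name borrowed from the description above.

```C++
#include <cstddef>
#include <unordered_map>
#include <vector>

// Minimal stand-in for nvfuser's TensorView, just to make the sketch compile.
struct TensorView {
  std::vector<TensorView*> consumers;
  std::size_t ca_pos = 0;
};

// Step 1 (propagation): how many of tv's outer dimensions map to the
// reference tensor's selected outer dimensions. Real logic omitted.
std::size_t computeMappedReferencePos(TensorView* /*tv*/) { return 0; }

// Step 2 ("for each"): decide the inline position by looking only at tv
// itself and its consumers, given the already-computed mapped position.
std::size_t decideInlinePosition(TensorView* tv, std::size_t mapped_pos) {
  std::size_t pos = mapped_pos;
  for (TensorView* consumer : tv->consumers) {
    (void)consumer; // real code would clamp pos against each consumer
  }
  return pos;
}

// One pass interleaving step 1 and step 2, as described above.
void inlinePropagateSketch(const std::vector<TensorView*>& traversal_order) {
  std::unordered_map<TensorView*, std::size_t> mapped_reference_pos_;
  for (TensorView* tv : traversal_order) {
    mapped_reference_pos_[tv] = computeMappedReferencePos(tv);        // step 1
    tv->ca_pos = decideInlinePosition(tv, mapped_reference_pos_[tv]); // step 2
  }
}
```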
* Extend the grouped grid reduction kernel

The kernel itself should work with an arbitrary number of inputs, but the underlying data structure, `Tuple`, still explicitly needs to be specialized for the number of values, which is currently limited to 8.
Fixes #1788. Added expand in `broadcast_in_dim` to support expanding to a concrete size. Note that we are not supporting dynamic shapes for concrete sizes at this moment.
TORCH_WARN on nvrtc debug option impacting performance.
… grid_persistent_normalization_cs
Mainly wanted to confirm torchrun works fine with dynamo/ddp, but it is also a better system than manually launching processes. Partially addresses issue #1779.

New run commands:

single process:
python benchmarks/dynamo/distributed.py [args]

multi-gpu (e.g. 2 gpu on one host):
torchrun --nproc_per_node 2 benchmarks/dynamo/distributed.py [args]

Pull Request resolved: pytorch#89149
Approved by: https://github.com/aazzolini
Made some changes to the grid persistent branch trying to get perf, starting with an arbitrary size I would hope would be straightforward to get perf on (not really certain it is). I hardcoded this size under the benchmark `NvFuserScheduler_ResNext_BatchNorm_nhwc_fp16`.