[WIP] Playing around with grid persistence. #1779
Commits on Jun 26, 2022
- Refactor TransformPropagator to allow specifying a position and propagating to part of the DAG (#1775) (commit a054b3e)
  `MaxInfoPropagator` is renamed to `MaxInfoSpanningTree`; it now only does path-finding, and the propagation lives in a separate class, `MaxInfoSpanningTree::Propagator`. The same holds for `MaxRootDomainInfoPropagator`. `MaxInfoSpanningTree` and `MaxRootDomainInfoSpanningTree` now allow specifying a selector, which controls which subgraph is included in path-finding, and `MaxRootDomainInfoSpanningTree` gains a few convenience constructors. `TransformPropagator` is now a subclass of `MaxInfoSpanningTree::Propagator`, so the way to use it has changed. `MaxInfoSpanningTree` and `MaxRootDomainInfoSpanningTree` also store the path after generation so that the same path can be traversed multiple times, which will be useful for use cases like the new `computeAt`. Pseudo-code:
  ```C++
  void TensorView::computeAt(TensorView* tv, int pos) {
    ComputeAtSubgraphSelector selector(this, tv);
    MaxRootDomainInfoSpanningTree path(tv, pos, &selector);
    TransformPropagator propagator(tv, pos);
    path.traverse(&propagator);
    ComputeAtPosPropagator ca_propagator(tv, pos);
    path.traverse(&ca_propagator);
  }
  ```
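  As a rough illustration of the selector concept, a minimal sketch of a custom selector (the hook names `allowC2P`/`allowP2C` follow the renaming commit further down this list, and the base-class details are assumed):
  ```C++
  #include <unordered_set>

  // Hypothetical selector that restricts path-finding to an explicit set of tensors.
  struct SubgraphSelector : public MaxInfoSpanningTree::Selector {
    std::unordered_set<TensorView*> allowed;
    bool allowC2P(TensorView* from, TensorView* to) override {
      return allowed.count(to) > 0;  // only walk into tensors in the set
    }
    bool allowP2C(TensorView* from, TensorView* to) override {
      return allowed.count(to) > 0;
    }
  };
  ```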
- Commit 1d3fd15
Commits on Jun 27, 2022
- Commit 44b3183
- Extend mma dimension and layout checking to support strided batched matmul and tensor contractions (#1761) (commit ecc7a87)
  Co-authored-by: Christian Sarofeen <csarofeen@nvidia.com>
- Commit d3de227
Commits on Jun 28, 2022
- Fix div(Val, TensorView) (#1778) (commit 86f46aa)
  * Fix div(scalar, tensor)
  * lintrunner: clang-format
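  A hedged sketch of the call this overload fix targets (the fusion setup and test-helper names are assumptions for illustration):
  ```C++
  // Hypothetical repro: a scalar divided by a tensor exercises div(Val*, TensorView*).
  Fusion fusion;
  FusionGuard fg(&fusion);
  TensorView* tv0 = makeSymbolicTensor(2);  // assumed test helper
  fusion.addInput(tv0);
  TensorView* tv1 = div(IrBuilder::create<Double>(1.0), tv0);
  fusion.addOutput(tv1);
  ```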
- Adding sibling path for MaxInfoSpanningTree (#1776) (commit 33a824d)
  The sibling path is required to generate consistent replay in some cases where `MaxInfoSpanningTree` is used with a selector, for example when the producer of a Welford is excluded from the propagation section. See the test `FusionTransformPropagateSelectorSibling_CUDA` for a detailed example. Besides, since siblings must be transformed exactly the same way, the sibling path is a perfect next hop for preserving information. If you want a spanning tree without a sibling path, override `allowSibling` to `return false` in your selector.
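  For instance, a minimal sketch of such a selector (hook names as in the renaming commit below; base-class details assumed):
  ```C++
  // Hypothetical selector that keeps normal propagation but opts out of sibling hops.
  struct NoSiblingSelector : public MaxInfoSpanningTree::Selector {
    bool allowC2P(TensorView* from, TensorView* to) override { return true; }
    bool allowP2C(TensorView* from, TensorView* to) override { return true; }
    bool allowSibling(TensorView* from, TensorView* to) override {
      return false;  // never take the sibling path
    }
  };
  ```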
- Commit 15488be
Commits on Jun 30, 2022
- Disable register reuse across serial broadcast ops (#1787) (commit 0c82ecf)
  Disable memory aliasing for inner sharing across serial broadcast.
- Commit ebf23a5
- Transform propagator skip replay when possible (#1782) (commit fe93bf5)
  This comment in the code describes what this PR is doing:
  ```C++
  // Note: [Using multiple TransformPropagators]
  // There are cases that we use multiple TransformPropagators along different
  // spanning trees with different references in the same fusion. Some of these
  // spanning trees could overlap. In cases when there are overlapping nodes,
  // TransformPropagator needs to respect the replay of others, because the
  // current TransformPropagator might not contain the most amount of
  // information on how to do the correct transformation. The logic below tells
  // TransformPropagator to skip the replay when not necessary.
  ```
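  A hedged sketch of the overlapping-trees scenario (reference tensors are made up for illustration):
  ```C++
  // Two spanning trees with different references may share tensors; the second
  // propagator skips replay on tensors the first already transformed consistently.
  MaxRootDomainInfoSpanningTree tree_a(tv_ref_a);
  TransformPropagator prop_a(tv_ref_a);
  tree_a.traverse(&prop_a);

  MaxRootDomainInfoSpanningTree tree_b(tv_ref_b);  // overlaps with tree_a
  TransformPropagator prop_b(tv_ref_b);
  tree_b.traverse(&prop_b);  // replay skipped where not necessary
  ```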
- Caching strides along with sizes (commit 59f3c32)
  This is to support the current expand, which introduces non-contiguous output tensors.
Commits on Jul 1, 2022
- Commit 635ebfc
- New compute at interface (#1743) (commit 28cbaf9)
  Rewrite of the compute-at pass to rely on the new propagation mechanisms.
- Commit 45f5203
- Some further cleanup for the new computeAt interface (#1793) (commit d0d0908)
  Revert `MaxProducerPosUpdater` to the old algorithm.
- Commit c077085
- Commit 3f2c263
- InlinePropagator please don't replay (#1797) (commit 38c7f3c)
  This PR makes `InlinePropagator` just set compute-at positions; it will not replay any tensor. If you want to replay, use `TransformPropagator` and friends to do so. `InlinePropagator` was already asserting no replay for standard and best-effort compute-at, so this PR is mostly about making most-inlined compute-at work the same way. It also does a lot of cleanup to remove the word "replay" from comments, variable names, and function names in `InlinePropagator`, and cleans up `recordReplayedPos` and `retrieveReplayedPos` so the logic is much easier to understand.
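  A hedged sketch of the resulting division of labor (constructor arguments and names are assumptions):
  ```C++
  // Replay transformations first with TransformPropagator...
  MaxRootDomainInfoSpanningTree tree(reference_tv);
  TransformPropagator transform_prop(reference_tv);
  tree.traverse(&transform_prop);
  // ...then set compute-at positions separately; InlinePropagator never replays.
  InlinePropagator inline_prop(reference_tv, pos, ComputeAtMode::Standard);
  tree.traverse(&inline_prop);
  ```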
- Per offline discussion with @csarofeen, this PR does a lot of renaming for better coding style (commit ef04f6c)
  For all propagation-related things, the names `P2C` and `C2P` are now used instead of `CasP` and `PasC`, because "A as B" somewhat implies we want to replay A the same as B, while "B to A" sounds more general and is a better fit for this case. The order of function arguments is also changed to match the order in the name: for example, `PasC` should take `(producer, consumer)`, i.e. `(to, from)`, not `(consumer, producer)`; and `C2P` should take `(consumer, producer)`, i.e. `(from, to)`, not `(producer, consumer)`.
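  A minimal sketch of the argument-order convention (function names and signatures are illustrative only, not the actual API):
  ```C++
  // "PasC" = producer-as-consumer: the name leads with the target,
  // so arguments read (to, from) = (producer, consumer).
  void replayPasC(TensorView* producer, TensorView* consumer);
  // "C2P" = consumer-to-producer: the name leads with the source,
  // so arguments read (from, to) = (consumer, producer).
  void replayC2P(TensorView* consumer, TensorView* producer);
  ```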
Commits on Jul 2, 2022
- Add parsing support for `_to_copy` to handle AMP casts (#1756) (commit 76b3cca)
  1. Add support for `_to_copy()` to support AMP casts.
  2. Refactor cast; accept `None` for dtype.
  3. Python tests.
  Co-authored-by: jjsjann123 <jiej@nvidia.com>
- Commit f008140
- Indexing refactor stage 2: Remove reference tensor in predicate indexing logic (#1784) (commit 8d384da)
  Co-authored-by: Christian Sarofeen <csarofeen@nvidia.com>
Commits on Jul 5, 2022
- More cleanup on InlinePropagator (#1800) (commit 5f375d0)
  `InlinePropagator` can be further simplified because it no longer replays. Since it does no replay, it is more of a "for each" problem than a propagation problem: for each tensor `tv`, once we know the max position of `tv` that is mapped to the reference tensor's selected outer dimensions (stored in `mapped_reference_pos_` in the code), setting the CA position is a very local operation, as simple as checking `tv` itself and all its consumers to determine the inline position. `InlinePropagator` is not completely a "for each" problem only because the computation of `mapped_reference_pos_` is itself a propagation problem.
  This cleanup reorganizes the code of `InlinePropagator` so it is clear that it is nothing but a two-step process:
  Step 1: Do a propagation to find `mapped_reference_pos_` for all tensors.
  Step 2: For each tensor, check itself and its consumers to determine the CA position.
  Conceptually, splitting step 1 from step 2 decouples the two ideas. In particular, `mapped_reference_pos_` now only contains information about the reference tensor and is independent of the CA position (previously this was not true for best-effort and most-inlined computeAt). In terms of implementation, steps 1 and 2 can be interleaved, because computing the CA position of `tv` does not require knowing `mapped_reference_pos_` for `tv`'s consumers, so a one-pass traversal can do both steps together. A pseudo-structure sketch follows below.
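  A hedged pseudo-structure of those two steps (everything except `mapped_reference_pos_` is an invented name for illustration):
  ```C++
  // Step 1 (propagation): walk the spanning tree from the reference tensor,
  // recording per tensor how many outer dimensions map back to the reference.
  for (TensorView* tv : spanning_tree_order) {
    mapped_reference_pos_[tv] = computeMappedPos(tv);  // hypothetical helper
  }
  // Step 2 ("for each", purely local): pick the CA position by inspecting
  // only tv and its direct consumers.
  for (TensorView* tv : all_tensors) {
    int pos = mapped_reference_pos_[tv];
    for (TensorView* consumer : consumersOf(tv)) {  // hypothetical helper
      pos = std::min(pos, maxInlinablePos(tv, consumer));
    }
    setComputeAtPos(tv, pos);  // hypothetical setter
  }
  ```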
- Commit 37c579e
- Grouping grid allreduces across iterations (#1755) (commit 025c840)
  Extend the grouped grid reduction kernel. The kernel itself should work with an arbitrary number of inputs, but the underlying data structure, `Tuple`, still needs to be explicitly specialized for the number of values, which is currently limited to 8.
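  For illustration only (not the actual kernel code), "explicitly specialized per arity" means each supported size has its own definition, so raising the cap of 8 means hand-writing more specializations:
  ```C++
  template <int N, typename T>
  struct Tuple;  // no generic definition in this sketch

  template <typename T>
  struct Tuple<1, T> { T v0; };

  template <typename T>
  struct Tuple<2, T> { T v0, v1; };

  // ... and so on, one specialization per arity, up to Tuple<8, T>.
  ```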
Commits on Jul 6, 2022
- Commit fa4e6a4
- Commit fd4be12
- Broadcast in dim with expand (#1794) (commit 3ba6a5f)
  Fixes #1788. Added expand in `broadcast_in_dim` to support expanding to a concrete size. Note that dynamic shapes for concrete sizes are not supported at this moment.
Commits on Jul 7, 2022
- TORCH_WARN on nvrtc debug option impacting performance (commit 282c429)
- Merge branch 'devel' of https://www.github.com/csarofeen/pytorch into grid_persistent_normalization_cs (commit e594590)