
[WIP] Playing around with grid persistence. #1779

Merged 31 commits on Jul 11, 2022

Commits on Jun 26, 2022

  1. Refactor TransformPropagator to allow specifying a position and propagating to part of the DAG (#1775)
    
    `MaxInfoPropagator` is renamed to `MaxInfoSpanningTree`; it now only does path-finding, and the propagation lives in a separate class, `MaxInfoSpanningTree::Propagator`. The same applies to `MaxRootDomainInfoPropagator`.
    
    `MaxInfoSpanningTree` and `MaxRootDomainInfoSpanningTree` now allow specifying a selector, which controls which subgraph should be included in path-finding.
    
    `MaxRootDomainInfoSpanningTree` also gets a few new convenience constructors.
    
    `TransformPropagator` is now a subclass of `MaxInfoSpanningTree::Propagator`, so the way to use it has changed.
    
    `MaxInfoSpanningTree` and `MaxRootDomainInfoSpanningTree` now store the path after generation so that the same path can be traversed multiple times. This will be useful to support use cases like the new `computeAt`. Pseudo-code:
    ```C++
    void TensorView::computeAt(TensorView* tv, int pos) {
      // Build the spanning tree once; the stored path can be traversed
      // multiple times with different propagators.
      ComputeAtSubgraphSelector selector(this, tv);
      MaxRootDomainInfoSpanningTree path(tv, pos, &selector);
      // First traversal: replay the transformations.
      TransformPropagator propagator(tv, pos);
      path.traverse(&propagator);
      // Second traversal over the same path: set compute-at positions.
      ComputeAtPosPropagator ca_propagator(tv, pos);
      path.traverse(&ca_propagator);
    }
    ```
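    
    As a rough sketch of how the split between path-finding and propagation can be used, a custom propagator could look like the following. The hook names and signatures (`propagateC2P`, `propagateP2C`, `propagateSibling`) are assumptions for illustration (they follow the naming adopted later in #1798), not a verbatim copy of the interface.
    ```C++
    #include <iostream>
    
    // Minimal sketch, assuming MaxInfoSpanningTree::Propagator exposes
    // per-edge virtual hooks roughly like these (names and signatures are
    // assumptions for illustration).
    class PrintPropagator : public MaxInfoSpanningTree::Propagator {
     public:
      void propagateC2P(TensorView* from, TensorView* to) override {
        // Called when the tree walks from a consumer to its producer.
        std::cout << "C2P hop" << std::endl;
      }
      void propagateP2C(TensorView* from, TensorView* to) override {
        // Called when the tree walks from a producer to its consumer.
        std::cout << "P2C hop" << std::endl;
      }
      void propagateSibling(TensorView* from, TensorView* to) override {
        // Called between sibling outputs of the same expression.
        std::cout << "sibling hop" << std::endl;
      }
    };
    
    // Usage: build the tree once, then traverse it with any propagator:
    //   MaxRootDomainInfoSpanningTree tree(reference_tv);
    //   PrintPropagator printer;
    //   tree.traverse(&printer);
    ```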
    zasdfgbnm committed Jun 26, 2022 · a054b3e
  2. Commit 1d3fd15

Commits on Jun 27, 2022

  1. Commit 44b3183
  2. Extend mma dimension and layout checking to support strided batched matmul and tensor contractions (#1761)
    
    Co-authored-by: Christian Sarofeen <csarofeen@nvidia.com>
    shmsong and csarofeen committed Jun 27, 2022 · ecc7a87
  3. Commit d3de227

Commits on Jun 28, 2022

  1. Fix div(Val, TensorView) (#1778)

    * Fix div(scalar, tensor)
    
    * lintrunner: clang-format
    IvanYashchuk committed Jun 28, 2022 · 86f46aa
  2. Adding sibling path for MaxInfoSpanningTree (#1776)

    The sibling path is required to generate consistent replay in some cases where `MaxInfoSpanningTree` is used with a selector, for example when the producer of a Welford is excluded from the propagation section. See the test `FusionTransformPropagateSelectorSibling_CUDA` for a detailed example. Besides, since we know that siblings should be transformed exactly the same way, the sibling path is a perfect next hop for preserving information.
    
    If you want a spanning tree without a sibling path, you can override `allowSibling` to `return false` in your selector, as sketched below.
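    
    A minimal sketch of such a selector, assuming the selector interface exposes per-edge hooks like the ones below (`allowC2P`/`allowP2C` follow the naming adopted later in #1798; the exact signatures may differ):
    ```C++
    // Sketch only: hook names and signatures are assumptions for illustration.
    struct NoSiblingSelector : public MaxInfoSpanningTree::Selector {
      bool allowC2P(TensorView* from, TensorView* to) override {
        return true;  // keep consumer->producer hops
      }
      bool allowP2C(TensorView* from, TensorView* to) override {
        return true;  // keep producer->consumer hops
      }
      bool allowSibling(TensorView* from, TensorView* to) override {
        return false;  // drop sibling hops from the spanning tree
      }
    };
    
    // Usage:
    //   NoSiblingSelector selector;
    //   MaxRootDomainInfoSpanningTree tree(reference_tv, &selector);
    ```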
    zasdfgbnm committed Jun 28, 2022 · 33a824d
  3. Save.

    csarofeen committed Jun 28, 2022 · 15488be

Commits on Jun 30, 2022

  1. Disable register reuse across serial broadcast ops (#1787)

    Disable memory aliasing for inner sharing across serial broadcast.
    shmsong committed Jun 30, 2022 · 0c82ecf
  2. Commit ebf23a5
  3. Transform propagator skip replay when possible (#1782)

    This comment in the code describes what this PR is doing:
    
    ```C++
      // Note: [Using multiple TransformPropagators]
      // There are cases that we use multiple TransformPropagators along different
      // spanning trees with different references in the same fusion. Some of these
      // spanning trees could overlap. In cases when there are overlapping nodes,
      // TransformPropagator needs to respect the replay of others, because the
      // current TransformPropagator might not contain the most amount of
      // information on how to do the correct transformation. The logic below tells
      // TransformPropagator to skip the replay when not necessary.
    ```
    zasdfgbnm committed Jun 30, 2022 · fe93bf5
  4. Output allocate patch (#1790)

    Cache strides along with sizes. This is needed to support the current expand, which introduces non-contiguous output tensors; see the sketch below.
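    
    A minimal sketch of the idea (the struct and function names are hypothetical, not the actual nvfuser caching structures): record both sizes and strides of each output so that a non-contiguous, expanded output can be reconstructed from the cache.
    ```C++
    #include <cstdint>
    #include <vector>
    #include <ATen/ATen.h>
    
    // Hypothetical cache entry: strides are needed in addition to sizes,
    // because an expanded output differs from a contiguous one only in its
    // (stride-0) strides, not in its sizes.
    struct CachedOutputMeta {
      std::vector<int64_t> sizes;
      std::vector<int64_t> strides;
    };
    
    CachedOutputMeta cacheOutputMeta(const at::Tensor& out) {
      return {out.sizes().vec(), out.strides().vec()};
    }
    ```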
    jjsjann123 committed Jun 30, 2022 · 59f3c32

Commits on Jul 1, 2022

  1. Commit 635ebfc
  2. New compute at interface (#1743)

    Rewrite of the compute at pass to rely on the new propagation mechanisms.
    zasdfgbnm committed Jul 1, 2022 · 28cbaf9
  3. Commit 45f5203
  4. Some further cleanup for the new computeAt interface (#1793)

    Revert MaxProducerPosUpdater to old algo.
    zasdfgbnm committed Jul 1, 2022 · d0d0908
  5. Commit c077085
  6. Commit 3f2c263
  7. InlinePropagator please don't replay (#1797)

    This PR makes `InlinePropagator` just set compute-at positions. It will not replay any tensor. If you want to replay, please use `TransformPropagator` and friends to do so.
    
    Currently, `InlinePropagator` already asserts that no replay happens for standard and best-effort compute-at, so this PR is mostly about making most-inlined compute-at work the same way.
    
    This PR also does a lot of cleanup to remove the word "replay" from comments and from variable and function names in `InlinePropagator`.
    
    I also cleaned up `recordReplayedPos` and `retrieveReplayedPos`; now the logic is much easier to understand.
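    
    A rough sketch of the intended division of labor (the constructor arguments shown are assumptions, not the exact signatures): replay with `TransformPropagator` first, then let `InlinePropagator` only set compute-at positions over the same spanning tree.
    ```C++
    // Sketch only: constructor arguments are assumptions for illustration.
    MaxRootDomainInfoSpanningTree tree(reference_tv, reference_pos);
    
    TransformPropagator transform_propagator(reference_tv, reference_pos);
    tree.traverse(&transform_propagator);  // does the replay
    
    InlinePropagator inline_propagator(reference_tv, reference_pos, ComputeAtMode::Standard);
    tree.traverse(&inline_propagator);     // only sets compute-at positions, never replays
    ```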
    zasdfgbnm committed Jul 1, 2022 · 38c7f3c
  8. Coding style cleanups (#1798)

    Per offline discussion with @csarofeen, this PR does a lot of renaming for better coding style. For all propagation-related things, I am now using the names `P2C` and `C2P` instead of `CasP` and `PasC`, because "A as B" somewhat implies we want to replay A the same as B, while "B to A" sounds more general and is a better fit for this case. I also modified the order of function arguments to match the order in the name: for example, `PasC` should take `(producer, consumer)`, i.e. `(to, from)`, not `(consumer, producer)`, i.e. `(from, to)`, and `C2P` should take `(consumer, producer)`, i.e. `(from, to)`, not `(producer, consumer)`, i.e. `(to, from)`.
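    
    Hypothetical signatures illustrating the argument-order rule (these are not the actual declarations): the parameter order follows the direction spelled out in the name, i.e. `(from, to)`.
    ```C++
    // "C2P" = consumer-to-producer, so the consumer ("from") comes first.
    void propagateC2P(TensorView* consumer /* from */, TensorView* producer /* to */);
    
    // "P2C" = producer-to-consumer, so the producer ("from") comes first.
    void propagateP2C(TensorView* producer /* from */, TensorView* consumer /* to */);
    ```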
    zasdfgbnm committed Jul 1, 2022 · ef04f6c

Commits on Jul 2, 2022

  1. Add parsing support for _to_copy to handle AMP casts. (#1756)

    1. Add parsing support for _to_copy() to handle AMP casts.
    2. Refactor cast to accept None for dtype.
    3. Add Python tests.
    
    Co-authored-by: jjsjann123 <jiej@nvidia.com>
    kevinstephano and jjsjann123 committed Jul 2, 2022 · 76b3cca
  2. Commit f008140
  3. Indexing refactor stage 2: Remove reference tensor in predicate indexing logic (#1784)
    
    Co-authored-by: Christian Sarofeen <csarofeen@nvidia.com>
    shmsong and csarofeen committed Jul 2, 2022 · 8d384da

Commits on Jul 5, 2022

  1. More cleanup on InlinePropagator (#1800)

    I just realized that `InlinePropagator` can be further simplified because it no longer replays.
    
    Since `InlinePropagator` is no longer doing replay, it is more like a "for each" problem rather than a propagation problem:
    
    For each tensor `tv`, if we already know the max position of `tv` that is mapped to the reference tensor's selected outer dimensions (stored in `mapped_reference_pos_` in the code), then setting the CA position is a very local operation: it is as simple as checking `tv` itself and all its consumers to determine the inline position.
    
    `InlinePropagator` is not completely a "for each" problem only because the computation of `mapped_reference_pos_` is a propagation problem.
    
    This cleanup reorganizes the code of `InlinePropagator` so it is clear that `InlinePropagator` is nothing but a two-step process:
    Step 1: Do a propagation to find the `mapped_reference_pos_` for all tensors.
    Step 2: For each tensor, check itself and its consumers to determine the CA position.
    
    Conceptually, I would like to separate step 1 from step 2, because this split decouples the two concepts. In particular, this PR makes `mapped_reference_pos_` contain only information about the reference tensor, independent of the CA position (without this PR, that is not true for best-effort and most-inlined computeAt). Now, in my view, `InlinePropagator` is conceptually very simple and easy to understand.
    
    In terms of implementation, step 1 and step 2 can be interleaved, because we don't need to know the `mapped_reference_pos_` of `tv`'s consumers in order to compute the CA position of `tv`. So a one-pass traversal can do both steps, as sketched below.
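    
    A rough sketch of the two-step structure (the helpers `computeMappedReferencePositions` and `findInlinePosition` are hypothetical, not actual `InlinePropagator` internals):
    ```C++
    #include <cstdint>
    #include <unordered_map>
    
    // Step 1: a propagation over the spanning tree records, for every tensor,
    // how many of its leading dimensions map to the reference tensor's
    // selected outer dimensions.
    std::unordered_map<TensorView*, int64_t> mapped_reference_pos =
        computeMappedReferencePositions(reference_tv, reference_pos);  // hypothetical
    
    // Step 2: a purely local "for each": the CA position of tv depends only
    // on tv itself and its direct consumers.
    for (auto& [tv, mapped_pos] : mapped_reference_pos) {
      int64_t ca_pos = findInlinePosition(tv, mapped_pos);  // hypothetical
      tv->setComputeAt(ca_pos);                             // hypothetical setter
    }
    ```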
    zasdfgbnm committed Jul 5, 2022 · 5f375d0
  2. Commit 37c579e
  3. Grouping grid allreduces across iterations (#1755)

    * Extend the grouped grid reduction kernel
    
    The kernel itself should work with an arbitrary number of inputs, but the underlying data structure, Tuple, still needs to be explicitly specialized for the number of values, which is currently limited to 8 (see the illustration below).
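    
    An illustrative sketch of why per-arity specialization is needed (the names below are hypothetical, not the actual runtime Tuple): the device-side tuple is written out member by member, so each supported number of values requires its own specialization, currently up to 8.
    ```C++
    // Hypothetical per-arity tuple; not the actual nvfuser runtime Tuple.
    template <typename T, int N>
    struct ValTuple;
    
    template <typename T>
    struct ValTuple<T, 2> {
      T val0;
      T val1;
    };
    
    template <typename T>
    struct ValTuple<T, 3> {
      T val0;
      T val1;
      T val2;
    };
    
    // ... one explicit specialization per supported arity, up to 8.
    ```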
    naoyam committed Jul 5, 2022 · 025c840

Commits on Jul 6, 2022

  1. Commit fa4e6a4
  2. Commit fd4be12
  3. Broadcast in dim with expand (#1794)

    Fixes #1788
    
    Added expand in broadcast_in_dim to support expanding to a concrete size. Note that we are not supporting dynamic shapes for the concrete size at this moment.
    jjsjann123 committed Jul 6, 2022 · 3ba6a5f

Commits on Jul 7, 2022

  1. spam nvrtc options (#1783)

    TORCH_WARN on nvrtc debug option impacting performance.
    jjsjann123 committed Jul 7, 2022 · 282c429
  2. Merge branch 'devel' of https://www.github.com/csarofeen/pytorch into grid_persistent_normalization_cs
    csarofeen committed Jul 7, 2022 · e594590