[NVFuser] Upstream push 1026 (pytorch#87779)
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

Codegen changes include:

* codegen improvements:
    i. allow non-root trivial reductions and empty/no-op fusions
    ii. fix vectorization checks and size calculation
    iii. improve bank conflict handling (see the sketch after these lists)
    iv. enable the transpose scheduler

* misc:
    i. fix CI test failures
    ii. clean up the cpp test files
    iii. add trivial forwarding support in the codegen runtime
    iv. add factory method support in codegen (a usage sketch follows the commit list)
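
As background for the bank conflict work (commits f5bca33 and bc77266 in the list below): such a checker statically derives, for each shared-memory access pattern, how many threads of a warp land in the same bank. The sketch below is not nvfuser code; it is a minimal standalone C++ illustration of the underlying bank arithmetic, assuming the usual layout of 32 banks, 4 bytes wide, on current NVIDIA GPUs.

```cpp
#include <algorithm>
#include <cstdio>
#include <set>
#include <vector>

// Shared memory is split into 32 banks of 4-byte words; two threads of a
// warp conflict when they touch different words mapping to the same bank.
constexpr int kNumBanks = 32;

// Worst-case conflict degree for one warp's 4-byte accesses at the given
// word indices: the maximum number of distinct words hitting one bank.
int conflictDegree(const std::vector<int>& word_indices) {
  std::vector<std::set<int>> words_per_bank(kNumBanks);
  for (int word : word_indices) {
    words_per_bank[word % kNumBanks].insert(word);
  }
  std::size_t degree = 1;
  for (const auto& words : words_per_bank) {
    degree = std::max(degree, words.size());
  }
  return static_cast<int>(degree);
}

int main() {
  // Column walk through a 32x32 float tile: stride 32 puts every thread
  // in bank 0; padding each row to 33 floats spreads them across banks.
  std::vector<int> col, col_padded;
  for (int t = 0; t < 32; ++t) {
    col.push_back(t * 32);
    col_padded.push_back(t * 33);
  }
  std::printf("unpadded: %d-way, padded: %d-way\n",
              conflictDegree(col), conflictDegree(col_padded));
  return 0;
}
```

Running it prints `unpadded: 32-way, padded: 1-way`: the unpadded column access serializes into 32 transactions, which is exactly the kind of pattern the checker flags.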

Commits in this PR from the devel branch:

```
7117a7e patching nvfuser conv cudnn test numerics mismatch (#2048)
65af1a4 Inserting sync for redundant parallel types is already done at the (#2023)
6ac74d1 Fix sync map (#2047)
f5bca33 Bank conflict checker improvements (#2032)
d2ca7e3 Minor update on cp.async code generation. (#1901)
d36cf61 Test file cleanup (#2040)
0b8e83f Allow non-root trivial reductions (#2037)
a2dfe40 Fix vectorize size calculation (#2035)
e040676 Use withPredicate to replace setPredicate to maintain Exprs immutable (#2025)
197221b removing ci workflow (#2034)
40e2703 Reduction rand like patch (#2031)
bc77266 Add utility for checking bank conflict of shared memory (#2029)
ddd1cf7 Add back FusionReductionWithTrivialReduction_CUDA (#2030)
fbd97e5 Revert "Cleanup trivial reduction workarounds (#2006)" (#2024)
bca20c1 Cleanup trivial reduction workarounds (#2006)
e4b6585 Trivial forwarding (#1995)
1a0e355 Fix contiguity analysis of predicates to match updated contiguity. (#1991)
a4effa6 Enable output allocation cache (#2010)
35440b7 Patching bn inference (#2016)
0f9f0b4 Add matmul benchmark (#2007)
45045cd Enable tests previously disabled due to an aliasing bug (#2005)
967aa77 Contiguous indexing for View operations (#1990)
a43cb20 Make inlining even more modular (#2004)
dc45835 Test util cleanup (#2003)
3ca21eb More strict validation (#2000)
a7a7d57 Fix build problem (#1999)
fc235b0 Just fixes comments (#1998)
482386c cleanup (#1997)
4cbe0db Improve divisible split detection (#1970)
42ccc52 Minor build fix. (#1996)
fcf8c09 Cleanup of lower_utils.cpp: Isolate out GpuLower usage (#1989)
15f2f6d Move ConcretizedBroadcastDomains to shared_ptr in GpuLower. (#1988)
8f1c7f5 Minor cleanup lower_unroll.cpp (#1994)
1d9858c Minor cleanup (#1992)
f262d9c Add support for uniform RNG (#1986)
eb1dad1 Remove non-const functions, remove GpuLower instance on build, pass in ca_map. (#1987)
634820c Add support for some empty fusion (#1981)
eabe8d8 Segment self mapping fusions (#1954)
e96aacf Enable Transpose operation (#1882)
425dce2 Add a null scheduler that helps segmenting away no-op schedules (#1835)
306d4a6 Fix canScheduleCompileTime check of transpose scheduler (#1969)
b1bd32c Minor fix (#1967)
bd93578 Enable transpose scheduler (#1927)
b7a206e Move scheduler vectorize utilities into their own file (#1959)
d9420e4 View scheduling (#1928)
c668e13 Upstream push ci fixes (#1965)
c40202b Fix dump effective bandwidth (#1962)
93505bc WAR on index mapping when exact and permissive maps differ (#1960)
45e95fd Allow splitting inner-most ID to create virtual innermost ID in transpose scheduler (#1930)
a3ecb33 Improve the comments at the beginning of index_compute.h (#1946)
f7bc341 Remove unused variables (#1955)
df3393a Some cleanup (#1957)
7d1d7c8 TVDomainGuard factory (#1953)
357ba22 Fill allocation with nan on tests (#1956)
8eafc54 Fix detection of unmappable root domains (#1952)
90a51f2 Some indexing cleanups, Add eye support (#1940)
ddc01e4 Exclude unsupported data types (#1951)
992e17c test the groups the same order as they are merged (#1949)
208262b Move detection of self mapping IDs to IterDomainGraph from (#1941)
ac4de38 Merge pull request #1945 from csarofeen/master_merge_0828
6310948 Add full, full_like, zeros, zeros_like, ones, ones_like (#1943)
aab10bc Merge remote-tracking branch 'upstream/viable/strict' into HEAD
4c254c0 Fix arange when step is negative (#1942)
89330aa Tensor factories must set the output shape as its input (#1939)
```
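
Commits 6310948 (full/full_like/zeros/ones factories), 90a51f2 (eye), and 89330aa above add tensor factory ops to the codegen. Below is a rough, unverified sketch of how a fusion using one of these ops might be defined, reusing the C++ names that appear in the benchmark diffs further down (Fusion, FusionGuard, makeContigTensor, DataType); the exact ones() signature and the add() call are assumptions, not the confirmed nvfuser API.

```cpp
// Hedged sketch only: assumes nvfuser's C++ fusion-definition headers are
// included, and that ones() takes a shape vector plus a DataType (an
// assumption based on the commit titles, not verified against the sources).
Fusion fusion;
FusionGuard fg(&fusion);

// A 2-D contiguous float input, as in the benchmarks below.
TensorView* tv0 = makeContigTensor(2, DataType::Float);
fusion.addInput(tv0);

// Factory op: materialize a tensor of ones matching tv0's symbolic shape.
TensorView* bias = ones(
    {tv0->axis(0)->extent(), tv0->axis(1)->extent()}, DataType::Float);

// Consume the factory output like any other tensor.
TensorView* tv1 = add(tv0, bias);
fusion.addOutput(tv1);
```

The point of factory support is that such tensors are produced inside the generated kernel rather than materialized by ATen and passed in; since a factory has no tensor inputs, its output shape must come from its scalar inputs, which is what commit 89330aa ("Tensor factories must set the output shape as its input") enforces.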

RUN_TORCHBENCH: nvfuser

Differential Revision: [D40869846](https://our.internmc.facebook.com/intern/diff/D40869846)
Pull Request resolved: pytorch#87779
Approved by: https://github.com/davidberard98
jjsjann123 authored and pytorchmergebot committed Nov 4, 2022
1 parent 15e5429 commit 7b419e8
Showing 152 changed files with 35,136 additions and 28,328 deletions.
3 changes: 3 additions & 0 deletions aten/src/ATen/core/interned_strings.h

```diff
@@ -50,8 +50,11 @@ namespace c10 {
   _(prim, FunctionalGraph)   \
   _(prim, add_optional)      \
   _(prim, view_copy)         \
+  _(prim, permute_copy)      \
   _(prim, reshape_copy)      \
   _(prim, squeeze_copy)      \
+  _(prim, t_copy)            \
+  _(prim, transpose_copy)    \
   _(prim, unsqueeze_copy)    \
   _(prim, flatten_copy)      \
   _(prim, expand_copy)       \
```
1 change: 1 addition & 0 deletions benchmarks/cpp/nvfuser/CMakeLists.txt

```diff
@@ -20,6 +20,7 @@ if(USE_CUDA)
     softmax_backward.cpp
     scale_bias_relu.cpp
     transpose.cpp
+    matmul.cpp
     timm.cpp
     utils.cpp
     main.cpp)
```
4 changes: 0 additions & 4 deletions benchmarks/cpp/nvfuser/batch_norm_channels_first.cpp

```diff
@@ -73,10 +73,6 @@ static void NvFuserScheduler_BatchNorm(
     DataType dtype) {
   TORCH_INTERNAL_ASSERT(dtype == DataType::Float || dtype == DataType::Half);
 
-  const bool kTraining = true;
-  const float kMomentum = 0.1;
-  const float kEps = 1e-5;
-
   std::vector<int64_t> input_shape{
       benchmark_state.range(0),
       benchmark_state.range(1),
```
4 changes: 0 additions & 4 deletions benchmarks/cpp/nvfuser/batch_norm_channels_first_backward.cpp

```diff
@@ -25,7 +25,6 @@ static void setupBatchNorm_BWD(Fusion* fusion, DataType dtype) {
   FusionGuard fg(fusion);
 
   const bool kTraining = true;
-  const float kMomentum = 0.1;
   const float kEps = 1e-5;
 
   // setup fusion
@@ -85,9 +84,6 @@
     DataType dtype) {
   TORCH_INTERNAL_ASSERT(dtype == DataType::Float || dtype == DataType::Half);
 
-  const bool kTraining = true;
-  const float kEps = 1e-5;
-
   std::vector<int64_t> input_shape{
       benchmark_state.range(0),
       benchmark_state.range(1),
```
4 changes: 0 additions & 4 deletions benchmarks/cpp/nvfuser/batch_norm_channels_last.cpp

```diff
@@ -74,10 +74,6 @@ static void NvFuserScheduler_BatchNorm_nhwc(
     DataType dtype) {
   TORCH_INTERNAL_ASSERT(dtype == DataType::Float || dtype == DataType::Half);
 
-  const bool kTraining = true;
-  const float kMomentum = 0.1;
-  const float kEps = 1e-5;
-
   std::vector<int64_t> input_shape{
       benchmark_state.range(0),
       benchmark_state.range(2),
```
4 changes: 0 additions & 4 deletions benchmarks/cpp/nvfuser/batch_norm_channels_last_backward.cpp

```diff
@@ -25,7 +25,6 @@ static void setupBatchNorm_nhwc_BWD(Fusion* fusion, DataType dtype) {
   FusionGuard fg(fusion);
 
   const bool kTraining = true;
-  const float kMomentum = 0.1;
   const float kEps = 1e-5;
 
   // setup fusion
@@ -86,9 +85,6 @@
     DataType dtype) {
   TORCH_INTERNAL_ASSERT(dtype == DataType::Float || dtype == DataType::Half);
 
-  const bool kTraining = true;
-  const float kEps = 1e-5;
-
   std::vector<int64_t> input_shape{
       benchmark_state.range(0),
       benchmark_state.range(2),
```
3 changes: 0 additions & 3 deletions benchmarks/cpp/nvfuser/gelu_backward.cpp

```diff
@@ -113,9 +113,6 @@ BENCHMARK(GeluBackward_AutoSchedule)->Unit(benchmark::kMicrosecond);
 //------------------------------------------------------------------------------
 
 static void GeluBackward_Lower(benchmark::State& benchmark_state) {
-  constexpr int kHiddenFeatures = 512;
-  constexpr int kBatchSize = 64;
-
   Fusion fusion;
 
   // setup fusion
```
2 changes: 0 additions & 2 deletions benchmarks/cpp/nvfuser/layer_norm.cpp

```diff
@@ -22,7 +22,6 @@ static void setupLayerNorm(Fusion* fusion, DataType dtype) {
 
   FusionGuard fg(fusion);
 
-  const int kReductionAxis = 1;
   const float kEps = 1e-5;
 
   Double* eps_ptr = IrBuilder::create<Double>(kEps);
@@ -61,7 +60,6 @@
 
   std::vector<int64_t> input_shape{
       benchmark_state.range(0), benchmark_state.range(1)};
-  const float kEps = 1e-5;
 
   // inputs
   at::manual_seed(0);
```
3 changes: 0 additions & 3 deletions benchmarks/cpp/nvfuser/layer_norm_backward.cpp

```diff
@@ -22,9 +22,6 @@ static void setupLayerNorm_BWD(Fusion* fusion, DataType dtype) {
 
   TORCH_INTERNAL_ASSERT(dtype == DataType::Float || dtype == DataType::Half);
 
-  const int kReductionAxis = 1;
-  Double* eps_ptr = IrBuilder::create<Double>(1e-5);
-
   // setup fusion
   auto grad_out = makeContigTensor(2, dtype);
   auto input = makeContigTensor(2, dtype);
```