Conversation
@yinying-lisa-li yinying-lisa-li commented Sep 14, 2023

CSR:
`lvlTypes = [ "dense", "compressed" ]` to `map = (d0, d1) -> (d0 : dense, d1 : compressed)`

CSC:
`lvlTypes = [ "dense", "compressed" ], dimToLvl = affine_map<(d0, d1) -> (d1, d0)>` to `map = (d0, d1) -> (d1 : dense, d0 : compressed)`

This is an ongoing effort: #66146
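For permutation maps like these, the rewrite is mechanical: the old `lvlTypes` list gives the per-level types, and `dimToLvl` (identity when absent) says which dimension each level stores. A small sketch of that derivation (the helper name and signature are hypothetical, purely for illustration; this is not part of the PR or the MLIR API):

```python
# Hypothetical helper: derive the new `map = ...` syntax from the old
# lvlTypes/dimToLvl attributes, for simple permutation dimToLvl maps.

def to_map_syntax(lvl_types, dim_order=None):
    """lvl_types: level types in level order, e.g. ["dense", "compressed"].
    dim_order: dimension index stored at each level (the dimToLvl
    permutation); defaults to the identity."""
    rank = len(lvl_types)
    if dim_order is None:
        dim_order = list(range(rank))  # identity dimToLvl
    dims = ", ".join(f"d{i}" for i in range(rank))
    lvls = ", ".join(f"d{dim_order[l]} : {t}" for l, t in enumerate(lvl_types))
    return f"map = ({dims}) -> ({lvls})"

# CSR: identity ordering.
print(to_map_syntax(["dense", "compressed"]))
# -> map = (d0, d1) -> (d0 : dense, d1 : compressed)

# CSC: dimToLvl = (d0, d1) -> (d1, d0), i.e. columns stored outermost.
print(to_map_syntax(["dense", "compressed"], dim_order=[1, 0]))
# -> map = (d0, d1) -> (d1 : dense, d0 : compressed)
```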

@yinying-lisa-li yinying-lisa-li added the mlir:sparse Sparse compiler in MLIR label Sep 14, 2023
@yinying-lisa-li yinying-lisa-li requested review from a team as code owners September 14, 2023 01:26

llvmbot commented Sep 14, 2023

@llvm/pr-subscribers-mlir
@llvm/pr-subscribers-mlir-linalg

@llvm/pr-subscribers-mlir-sparse

Changes

CSR: `lvlTypes = [ "dense", "compressed" ]` to `map = (d0, d1) -> (d0 : dense, d1 : compressed)`

CSC: `lvlTypes = [ "dense", "compressed" ], dimToLvl = affine_map<(d0, d1) -> (d1, d0)>` to `map = (d0, d1) -> (d1 : dense, d0 : compressed)`

Patch is 54.80 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/66309.diff

68 Files Affected:

  • (modified) mlir/test/Dialect/Bufferization/ops.mlir (+1-1)
  • (modified) mlir/test/Dialect/Linalg/drop-unit-extent-dims.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/GPU/gpu_combi.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/GPU/gpu_matmul.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/GPU/gpu_matmul_lib.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/GPU/gpu_matvec.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/GPU/gpu_sampled_matmul_lib.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/GPU/gpu_spgemm_lib.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/codegen.mlir (+2-3)
  • (modified) mlir/test/Dialect/SparseTensor/codegen_sparse_alloc.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/codegen_sparse_dealloc.mlir (+2-3)
  • (modified) mlir/test/Dialect/SparseTensor/conversion.mlir (+2-3)
  • (modified) mlir/test/Dialect/SparseTensor/convert_dense2sparse.mlir (+2-3)
  • (modified) mlir/test/Dialect/SparseTensor/convert_sparse2dense.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/invalid.mlir (+12-12)
  • (modified) mlir/test/Dialect/SparseTensor/invalid_encoding.mlir (+4-4)
  • (modified) mlir/test/Dialect/SparseTensor/pack_copy.mlir (+1-2)
  • (modified) mlir/test/Dialect/SparseTensor/rewriting_for_codegen.mlir (+2-3)
  • (modified) mlir/test/Dialect/SparseTensor/semi_ring.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_2d.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_affine.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_expand.mlir (+2-3)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_extract_slice.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_lower.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_lower_col.mlir (+1-2)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_lower_inplace.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_matmul_codegen.mlir (+1-2)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_out.mlir (+1-2)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_parallel.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_parallel_reduce.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_vector.mlir (+2-2)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_vector_chain.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_vector_concat.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/specifier_to_llvm.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/spy_sddmm.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/transform-ops.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/unused-tensor.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_0.mlir (+2-3)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_0_permute.mlir (+2-3)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_1.mlir (+2-3)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_1_permute.mlir (+2-3)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output.mlir (+1-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/dual_sparse_conv_2d.mlir (+2-3)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_codegen_foreach.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conv_2d.mlir (+2-3)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conversion_ptr.mlir (+1-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_coo_test.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_expand.mlir (+1-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_foreach_slices.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_insert_2d.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_matmul.mlir (+1-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_matmul_slice.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_matvec.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_pack.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_pack_libgen.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_reduce_custom.mlir (+2-3)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_reduce_custom_prod.mlir (+2-3)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_scale.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_select.mlir (+2-3)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_spmm.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_storage.mlir (+2-3)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-gemm-lib.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-matmul-lib.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-matvec-const.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-matvec-lib.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-matvec.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-sampled-matmul-lib.mlir (+1-1)
  • (modified) mlir/test/python/dialects/sparse_tensor/dialect.py (+1-2)

diff --git a/mlir/test/Dialect/Bufferization/ops.mlir b/mlir/test/Dialect/Bufferization/ops.mlir
index 665f5697fdc5fdf..dc53e535bfe0d57 100644
--- a/mlir/test/Dialect/Bufferization/ops.mlir
+++ b/mlir/test/Dialect/Bufferization/ops.mlir
@@ -2,7 +2,7 @@
 // RUN: mlir-opt %s --mlir-print-op-generic | mlir-opt | FileCheck %s

 #CSR = #sparse_tensor.encoding<{
-  lvlTypes = ["dense", "compressed"]
+  map = (d0, d1) -> (d0 : dense, d1 : compressed)
 }>

 // CHECK-LABEL: func @test_clone
diff --git a/mlir/test/Dialect/Linalg/drop-unit-extent-dims.mlir b/mlir/test/Dialect/Linalg/drop-unit-extent-dims.mlir
index 88659f8628ae70a..795e9ee5287173f 100644
--- a/mlir/test/Dialect/Linalg/drop-unit-extent-dims.mlir
+++ b/mlir/test/Dialect/Linalg/drop-unit-extent-dims.mlir
@@ -854,7 +854,7 @@ func.func @input_stays_same(%arg0 : memref<?x1x?xf32, strided<[?, 1, 1]>>, %arg1
   iterator_types = ["parallel", "reduction"]
 }

-#CSR = #sparse_tensor.encoding<{ lvlTypes = ["dense", "compressed"] }>
+#CSR = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : dense, d1 : compressed) }>

 func.func @sparse_case(%arg0: tensor<8x8xf32, #CSR>, %arg1: tensor<8xf32>) -> tensor<8xf32> {
   %0 = tensor.empty() : tensor<8xf32>
diff --git a/mlir/test/Dialect/SparseTensor/GPU/gpu_combi.mlir b/mlir/test/Dialect/SparseTensor/GPU/gpu_combi.mlir
index 568487205ba3e34..0979884cbd502a5 100644
--- a/mlir/test/Dialect/SparseTensor/GPU/gpu_combi.mlir
+++ b/mlir/test/Dialect/SparseTensor/GPU/gpu_combi.mlir
@@ -3,7 +3,7 @@
 // RUN:   --sparsification="parallelization-strategy=dense-outer-loop"
 // RUN:   --sparse-gpu-codegen | FileCheck %s

-#CSR = #sparse_tensor.encoding<{ lvlTypes = [ "dense", "compressed" ] }>
+#CSR = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : dense, d1 : compressed) }>

 //
 // CHECK-LABEL: gpu.module @sparse_kernels
diff --git a/mlir/test/Dialect/SparseTensor/GPU/gpu_matmul.mlir b/mlir/test/Dialect/SparseTensor/GPU/gpu_matmul.mlir
index b0fa5615c6c1f28..84265398d60cd87 100644
--- a/mlir/test/Dialect/SparseTensor/GPU/gpu_matmul.mlir
+++ b/mlir/test/Dialect/SparseTensor/GPU/gpu_matmul.mlir
@@ -3,7 +3,7 @@
 // RUN:   --sparsification="parallelization-strategy=dense-outer-loop"
 // RUN:   --sparse-gpu-codegen | FileCheck %s

-#CSR = #sparse_tensor.encoding<{ lvlTypes = [ "dense", "compressed" ] }>
+#CSR = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : dense, d1 : compressed) }>

 //
 // Compute matrix matrix C = AB
diff --git a/mlir/test/Dialect/SparseTensor/GPU/gpu_matmul_lib.mlir b/mlir/test/Dialect/SparseTensor/GPU/gpu_matmul_lib.mlir
index 125a67b78498a80..73161bdb135ca4a 100644
--- a/mlir/test/Dialect/SparseTensor/GPU/gpu_matmul_lib.mlir
+++ b/mlir/test/Dialect/SparseTensor/GPU/gpu_matmul_lib.mlir
@@ -1,7 +1,7 @@
 // RUN: mlir-opt %s --linalg-generalize-named-ops
 // RUN:   --sparsification="enable-gpu-libgen" | FileCheck %s

-#CSR = #sparse_tensor.encoding<{ lvlTypes = [ "dense", "compressed" ] }>
+#CSR = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : dense, d1 : compressed) }>

 //
 // Compute matrix matrix C = AB
diff --git a/mlir/test/Dialect/SparseTensor/GPU/gpu_matvec.mlir b/mlir/test/Dialect/SparseTensor/GPU/gpu_matvec.mlir
index b9d33f2e2b0694f..b56f3a90aa27c34 100644
--- a/mlir/test/Dialect/SparseTensor/GPU/gpu_matvec.mlir
+++ b/mlir/test/Dialect/SparseTensor/GPU/gpu_matvec.mlir
@@ -3,7 +3,7 @@
 // RUN:   --sparsification="parallelization-strategy=dense-outer-loop"
 // RUN:   --sparse-gpu-codegen | FileCheck %s

-#CSR = #sparse_tensor.encoding<{ lvlTypes = [ "dense", "compressed" ] }>
+#CSR = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : dense, d1 : compressed) }>

 //
 // Compute matrix vector y = Ax
diff --git a/mlir/test/Dialect/SparseTensor/GPU/gpu_sampled_matmul_lib.mlir b/mlir/test/Dialect/SparseTensor/GPU/gpu_sampled_matmul_lib.mlir
index 71641f33f82bd24..3c8e4c14e0c6a26 100644
--- a/mlir/test/Dialect/SparseTensor/GPU/gpu_sampled_matmul_lib.mlir
+++ b/mlir/test/Dialect/SparseTensor/GPU/gpu_sampled_matmul_lib.mlir
@@ -19,7 +19,7 @@
   iterator_types = ["parallel", "parallel"]
 }

-#CSR = #sparse_tensor.encoding<{ lvlTypes = [ "dense", "compressed" ] }>
+#CSR = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : dense, d1 : compressed) }>

 // CHECK-LABEL:   func.func @sparse_sampled_dd(
 // CHECK-SAME:      %[[VAL_0:.*]]: tensor<8x8xf64, #sparse_tensor.encoding<{ lvlTypes = [ "dense", "compressed" ] }>>,
diff --git a/mlir/test/Dialect/SparseTensor/GPU/gpu_spgemm_lib.mlir b/mlir/test/Dialect/SparseTensor/GPU/gpu_spgemm_lib.mlir
index d880a9688077bdd..7b4c48dc34105d0 100644
--- a/mlir/test/Dialect/SparseTensor/GPU/gpu_spgemm_lib.mlir
+++ b/mlir/test/Dialect/SparseTensor/GPU/gpu_spgemm_lib.mlir
@@ -1,7 +1,7 @@
 // RUN: mlir-opt %s --linalg-generalize-named-ops
 // RUN:   --sparsification="enable-gpu-libgen" | FileCheck %s

-#CSR = #sparse_tensor.encoding<{ lvlTypes = [ "dense", "compressed" ] }>
+#CSR = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : dense, d1 : compressed) }>

 // CHECK-LABEL:   func.func @matmulCSR(
 // CHECK-SAME:      %[[VAL_0:.*0]]: tensor<8x8xf32, #{{.*}}>,
diff --git a/mlir/test/Dialect/SparseTensor/codegen.mlir b/mlir/test/Dialect/SparseTensor/codegen.mlir
index 5155e5ce6c45474..43d86a9f158f03c 100644
--- a/mlir/test/Dialect/SparseTensor/codegen.mlir
+++ b/mlir/test/Dialect/SparseTensor/codegen.mlir
@@ -21,7 +21,7 @@
 }>

 #CSR = #sparse_tensor.encoding<{
-  lvlTypes = [ "dense", "compressed" ],
+  map = (d0, d1) -> (d0 : dense, d1 : compressed),
   crdWidth = 64,
   posWidth = 32
 }>
@@ -31,8 +31,7 @@
 }>

 #CSC = #sparse_tensor.encoding<{
-  lvlTypes = [ "dense", "compressed" ],
-  dimToLvl = affine_map<(i, j) -> (j, i)>
+  map = (d0, d1) -> (d1 : dense, d0 : compressed)
 }>

 #DCSR = #sparse_tensor.encoding<{
diff --git a/mlir/test/Dialect/SparseTensor/codegen_sparse_alloc.mlir b/mlir/test/Dialect/SparseTensor/codegen_sparse_alloc.mlir
index e1a901db5459f53..479642e5db4ed1e 100644
--- a/mlir/test/Dialect/SparseTensor/codegen_sparse_alloc.mlir
+++ b/mlir/test/Dialect/SparseTensor/codegen_sparse_alloc.mlir
@@ -1,6 +1,6 @@
 // RUN: mlir-opt %s --sparse-tensor-codegen --canonicalize --cse | FileCheck %s

-#CSR = #sparse_tensor.encoding<{ lvlTypes = ["dense", "compressed"]}>
+#CSR = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : dense, d1 : compressed)}>
 #COO = #sparse_tensor.encoding<{ lvlTypes = ["compressed_nu", "singleton"]}>

 // CHECK-LABEL: func.func @sparse_alloc_copy_CSR(
diff --git a/mlir/test/Dialect/SparseTensor/codegen_sparse_dealloc.mlir b/mlir/test/Dialect/SparseTensor/codegen_sparse_dealloc.mlir
index 1aff486e49fb2e2..59e568dd5de6461 100644
--- a/mlir/test/Dialect/SparseTensor/codegen_sparse_dealloc.mlir
+++ b/mlir/test/Dialect/SparseTensor/codegen_sparse_dealloc.mlir
@@ -6,10 +6,9 @@
 // RUN:   --sparse-tensor-codegen=create-sparse-deallocs=true
 // RUN:   --canonicalize --cse | FileCheck %s -check-prefix=CHECK-DEALLOC

-#CSR = #sparse_tensor.encoding<{ lvlTypes = ["dense", "compressed"]}>
+#CSR = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : dense, d1 : compressed)}>
 #CSC = #sparse_tensor.encoding<{
-  lvlTypes = ["dense", "compressed"],
-  dimToLvl = affine_map<(i,j) -> (j,i)>
+  map = (d0, d1) -> (d1 : dense, d0 : compressed),
 }>

 //
diff --git a/mlir/test/Dialect/SparseTensor/conversion.mlir b/mlir/test/Dialect/SparseTensor/conversion.mlir
index ae9e312de7f2747..f8e30872a0756c7 100644
--- a/mlir/test/Dialect/SparseTensor/conversion.mlir
+++ b/mlir/test/Dialect/SparseTensor/conversion.mlir
@@ -17,12 +17,11 @@
 }>

 #CSR = #sparse_tensor.encoding<{
-  lvlTypes = ["dense", "compressed"]
+  map = (d0, d1) -> (d0 : dense, d1 : compressed)
 }>

 #CSC = #sparse_tensor.encoding<{
-  lvlTypes = ["dense", "compressed"],
-  dimToLvl = affine_map<(i,j) -> (j,i)>
+  map = (d0, d1) -> (d1 : dense, d0 : compressed)
 }>

 #SparseTensor = #sparse_tensor.encoding<{
diff --git a/mlir/test/Dialect/SparseTensor/convert_dense2sparse.mlir b/mlir/test/Dialect/SparseTensor/convert_dense2sparse.mlir
index f2ac0c22e035ee4..4707b199222ad49 100644
--- a/mlir/test/Dialect/SparseTensor/convert_dense2sparse.mlir
+++ b/mlir/test/Dialect/SparseTensor/convert_dense2sparse.mlir
@@ -7,12 +7,11 @@
 }>

 #CSR = #sparse_tensor.encoding<{
-  lvlTypes = ["dense", "compressed"]
+  map = (d0, d1) -> (d0 : dense, d1 : compressed)
 }>

 #CSC = #sparse_tensor.encoding<{
-  lvlTypes = [ "dense", "compressed" ],
-  dimToLvl = affine_map<(i, j) -> (j, i)>
+  map = (d0, d1) -> (d1 : dense, d0 : compressed)
 }>

 #SparseTensor = #sparse_tensor.encoding<{
diff --git a/mlir/test/Dialect/SparseTensor/convert_sparse2dense.mlir b/mlir/test/Dialect/SparseTensor/convert_sparse2dense.mlir
index 7328ede697d96a9..363a63eb8ed1eca 100644
--- a/mlir/test/Dialect/SparseTensor/convert_sparse2dense.mlir
+++ b/mlir/test/Dialect/SparseTensor/convert_sparse2dense.mlir
@@ -8,7 +8,7 @@
 }>

 #SparseMatrix = #sparse_tensor.encoding<{
-  lvlTypes = ["dense", "compressed"]
+  map = (d0, d1) -> (d0 : dense, d1 : compressed)
 }>

 #SparseTensor = #sparse_tensor.encoding<{
diff --git a/mlir/test/Dialect/SparseTensor/invalid.mlir b/mlir/test/Dialect/SparseTensor/invalid.mlir
index 360dfcce2ef2bab..3091b0b8505d220 100644
--- a/mlir/test/Dialect/SparseTensor/invalid.mlir
+++ b/mlir/test/Dialect/SparseTensor/invalid.mlir
@@ -44,7 +44,7 @@ func.func @invalid_pack_type(%values: tensor<6xf64>, %pos: tensor<2xi32>, %coord

 // -----

-#CSR = #sparse_tensor.encoding<{lvlTypes = ["dense", "compressed"], posWidth=32, crdWidth=32}>
+#CSR = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : dense, d1 : compressed), posWidth=32, crdWidth=32}>

 func.func @invalid_pack_mis_position(%values: tensor<6xf64>, %coordinates: tensor<6xi32>)
     -> tensor<2x100xf64, #CSR> {
@@ -80,7 +80,7 @@ func.func @invalid_unpack_type(%sp: tensor<100x2xf64, #SparseVector>, %values: t

 // -----

-#CSR = #sparse_tensor.encoding<{lvlTypes = ["dense", "compressed"], posWidth=32, crdWidth=32}>
+#CSR = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : dense, d1 : compressed), posWidth=32, crdWidth=32}>

 func.func @invalid_unpack_mis_position(%sp: tensor<2x100xf64, #CSR>, %values: tensor<6xf64>, %coordinates: tensor<6xi32>) {
   // expected-error@+1 {{inconsistent number of fields between input/output}}
@@ -297,7 +297,7 @@ func.func @sparse_unannotated_insert(%arg0: tensor<128xf64>, %arg1: index, %arg2

 // -----

-#CSR = #sparse_tensor.encoding<{lvlTypes = ["dense", "compressed"]}>
+#CSR = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : dense, d1 : compressed)}>

 func.func @sparse_wrong_arity_insert(%arg0: tensor<128x64xf64, #CSR>, %arg1: index, %arg2: f64) {
   // expected-error@+1 {{'sparse_tensor.insert' op incorrect number of coordinates}}
@@ -347,7 +347,7 @@ func.func @sparse_unannotated_compression(%arg0: memref<?xf64>,

 // -----

-#CSR = #sparse_tensor.encoding<{lvlTypes = ["dense", "compressed"]}>
+#CSR = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : dense, d1 : compressed)}>

 func.func @sparse_wrong_arity_compression(%arg0: memref<?xf64>,
                                           %arg1: memref<?xi1>,
@@ -381,7 +381,7 @@ func.func @sparse_convert_rank_mismatch(%arg0: tensor<10x10xf64, #DCSR>) -> tens

 // -----

-#CSR = #sparse_tensor.encoding<{lvlTypes = ["dense", "compressed"]}>
+#CSR = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : dense, d1 : compressed)}>

 func.func @sparse_convert_dim_mismatch(%arg0: tensor<10x?xf32>) -> tensor<10x10xf32, #CSR> {
   // expected-error@+1 {{unexpected conversion mismatch in dimension 1}}
@@ -632,7 +632,7 @@ func.func @invalid_select_wrong_yield(%arg0: f64) -> f64 {

 // -----

-#DC = #sparse_tensor.encoding<{lvlTypes = ["dense", "compressed"]}>
+#DC = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : dense, d1 : compressed)}>
 func.func @invalid_concat_less_inputs(%arg: tensor<9x4xf64, #DC>) -> tensor<9x4xf64, #DC> {
   // expected-error@+1 {{Need at least two tensors to concatenate.}}
   %0 = sparse_tensor.concatenate %arg {dimension = 1 : index}
@@ -642,7 +642,7 @@ func.func @invalid_concat_less_inputs(%arg: tensor<9x4xf64, #DC>) -> tensor<9x4x

 // -----

-#DC = #sparse_tensor.encoding<{lvlTypes = ["dense", "compressed"]}>
+#DC = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : dense, d1 : compressed)}>
 func.func @invalid_concat_dim(%arg0: tensor<2x4xf64, #DC>,
                               %arg1: tensor<3x4xf64, #DC>,
                               %arg2: tensor<4x4xf64, #DC>) -> tensor<9x4xf64, #DC> {
@@ -657,7 +657,7 @@ func.func @invalid_concat_dim(%arg0: tensor<2x4xf64, #DC>,
 // -----

 #C = #sparse_tensor.encoding<{map = (d0) -> (d0 : compressed)}>
-#DC = #sparse_tensor.encoding<{lvlTypes = ["dense", "compressed"]}>
+#DC = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : dense, d1 : compressed)}>
 #DCC = #sparse_tensor.encoding<{lvlTypes = ["dense", "compressed", "compressed"]}>
 func.func @invalid_concat_rank_mismatch(%arg0: tensor<2xf64, #C>,
                                         %arg1: tensor<3x4xf64, #DC>,
@@ -672,7 +672,7 @@ func.func @invalid_concat_rank_mismatch(%arg0: tensor<2xf64, #C>,

 // -----

-#DC = #sparse_tensor.encoding<{lvlTypes = ["dense", "compressed"]}>
+#DC = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : dense, d1 : compressed)}>
 func.func @invalid_concat_size_mismatch_dyn(%arg0: tensor<?x4xf64, #DC>,
...

#CSC = #sparse_tensor.encoding<{
-  lvlTypes = [ "dense", "compressed" ],
-  dimToLvl = affine_map<(i, j) -> (j, i)>
+  map = (d0, d1) -> (d1 : dense, d0 : compressed)
And this is where it gets interesting ;-)
Neat!

@yinying-lisa-li yinying-lisa-li merged commit e2e429d into llvm:main Sep 14, 2023
@yinying-lisa-li yinying-lisa-li deleted the migrate2 branch September 14, 2023 16:22
yinying-lisa-li added a commit that referenced this pull request Sep 14, 2023
**Dense**
`lvlTypes = [ "dense", "dense" ]` to `map = (d0, d1) -> (d0 : dense, d1
: dense)`
`lvlTypes = [ "dense", "dense" ], dimToLvl = affine_map<(i,j) -> (j,i)>`
to `map = (d0, d1) -> (d1 : dense, d0 : dense)`

**DCSR**
`lvlTypes = [ "compressed", "compressed" ]` to `map = (d0, d1) -> (d0 :
compressed, d1 : compressed)`

**DCSC**
`lvlTypes = [ "compressed", "compressed" ], dimToLvl = affine_map<(i,j)
-> (j,i)>` to `map = (d0, d1) -> (d1 : compressed, d0 : compressed)`

**Block Row**
`lvlTypes = [ "compressed", "dense" ]` to `map = (d0, d1) -> (d0 :
compressed, d1 : dense)`

**Block Column**
`lvlTypes = [ "compressed", "dense" ], dimToLvl = affine_map<(i,j) ->
(j,i)>` to `map = (d0, d1) -> (d1 : compressed, d0 : dense)`

This is an ongoing effort: #66146, #66309
yinying-lisa-li added a commit that referenced this pull request Sep 15, 2023
**COO**
`lvlTypes = [ "compressed_nu", "singleton" ]` to `map = (d0, d1) -> (d0
: compressed(nonunique), d1 : singleton)`
`lvlTypes = [ "compressed_nu_no", "singleton_no" ]` to `map = (d0, d1)
-> (d0 : compressed(nonunique, nonordered), d1 : singleton(nonordered))`

**SortedCOO**
`lvlTypes = [ "compressed_nu", "singleton" ]` to `map = (d0, d1) -> (d0
: compressed(nonunique), d1 : singleton)`

**BCOO**
`lvlTypes = [ "dense", "compressed_hi_nu", "singleton" ]` to `map = (d0,
d1, d2) -> (d0 : dense, d1 : compressed(nonunique, high), d2 :
singleton)`

**BCSR**
`lvlTypes = [ "compressed", "compressed", "dense", "dense" ], dimToLvl =
affine_map<(d0, d1) -> (d0 floordiv 2, d1 floordiv 3, d0 mod 2, d1 mod
3)>` to
`map = ( i, j ) ->
      ( i floordiv 2 : compressed,
        j floordiv 3 : compressed,
        i mod 2 : dense,
        j mod 3 : dense
      )`

**Tensor and other supported formats (e.g. CCC, CDC, CCCC)**

Currently, ELL and slice are not yet supported in the new syntax; the
CHECK tests will be updated once printing is switched to emit the new
syntax.
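The level-type renames in this commit (e.g. `compressed_nu_no` becoming `compressed(nonunique, nonordered)`) follow a simple pattern: each old suffix turns into a parenthesized property. A sketch of that mapping (the helper and the suffix table are hypothetical illustrations inferred from the examples above, not code from the PR):

```python
# Hypothetical translation of old suffixed level-type names to the new
# parenthesized property syntax (suffix table inferred from the examples).
_SUFFIX_PROPS = {"nu": "nonunique", "no": "nonordered", "hi": "high"}

def translate_lvl_type(old_name):
    base, *suffixes = old_name.split("_")
    props = [_SUFFIX_PROPS[s] for s in suffixes]
    return f"{base}({', '.join(props)})" if props else base

print(translate_lvl_type("compressed_nu_no"))
# -> compressed(nonunique, nonordered)
print(translate_lvl_type("singleton"))
# -> singleton
```

Note the property order produced here follows suffix order, which may differ from the canonical order the printer emits.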

Previous PRs: #66146, #66309, #66443
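The BCSR map above is the first example where dimToLvl is not a permutation: each matrix entry is addressed by a block coordinate plus an in-block offset. A small sketch evaluating that map, `(i, j) -> (i floordiv 2, j floordiv 3, i mod 2, j mod 3)`, on concrete coordinates (the helper is illustrative only, not part of the PR):

```python
# Evaluate the BCSR dim-to-lvl map from the commit message to show which
# level coordinates a matrix entry (i, j) lands in: two outer levels pick
# the 2x3 block, two inner levels pick the position within the block.

def bcsr_lvl_coords(i, j, block_rows=2, block_cols=3):
    return (i // block_rows, j // block_cols, i % block_rows, j % block_cols)

print(bcsr_lvl_coords(5, 7))
# entry (5, 7) sits in block (2, 2) at in-block offset (1, 1)
```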
ZijunZhaoCCK pushed a commit to ZijunZhaoCCK/llvm-project that referenced this pull request Sep 19, 2023
ZijunZhaoCCK pushed a commit to ZijunZhaoCCK/llvm-project that referenced this pull request Sep 19, 2023
ZijunZhaoCCK pushed a commit to ZijunZhaoCCK/llvm-project that referenced this pull request Sep 19, 2023
zahiraam pushed a commit to tahonermann/llvm-project that referenced this pull request Oct 24, 2023
zahiraam pushed a commit to tahonermann/llvm-project that referenced this pull request Oct 24, 2023

Labels

mlir:bufferization Bufferization infrastructure mlir:gpu mlir:linalg mlir:sparse Sparse compiler in MLIR mlir


4 participants