Start working on Tensor Infer #2

Closed · tqchen opened this issue Oct 14, 2016 · 0 comments

tqchen (Member) commented Oct 14, 2016:

No description provided.

tqchen closed this as completed Oct 14, 2016
tqchen referenced this issue in tqchen/tvm May 26, 2018
* [SYMBOLIC] Add symbolic API

* Update Testcase to nnvm
tqchen added a commit that referenced this issue May 29, 2018
tqchen referenced this issue in tqchen/tvm Jul 6, 2018
tmoreau89 pushed a commit to tmoreau89/tvm that referenced this issue Jan 2, 2019
wweic pushed a commit to wweic/tvm that referenced this issue Feb 4, 2019
tmoreau89 pushed a commit to tmoreau89/tvm that referenced this issue Mar 22, 2019
tmoreau89 pushed a commit to tmoreau89/tvm that referenced this issue Mar 22, 2019
weberlo added a commit to weberlo/tvm that referenced this issue Jun 17, 2019
weberlo added a commit to weberlo/tvm that referenced this issue Jun 18, 2019
weberlo added a commit to weberlo/tvm that referenced this issue Jun 19, 2019
wweic added a commit to wweic/tvm that referenced this issue Nov 5, 2019
prashantsail pushed a commit to prashantsail/incubator-tvm that referenced this issue May 14, 2020
weberlo pushed a commit to weberlo/tvm that referenced this issue Jul 31, 2020
Add components for tutorial 3
rohanmukh pushed a commit to anijain2305/tvm that referenced this issue Sep 30, 2020
zhiics pushed a commit that referenced this issue Oct 3, 2020
* Change onnx importer to use dynamic upsampling3d (#3)

fix pylint

* Refactor ONNX frontend to be dynamic

Make OneHot dynamic

Support BatchMatMul with dynamically shaped inputs

fix dynamic broadcast

Add null checks to broadcast_to rel functions

fail more isolated broadcast_to test

use StructuralEqual instead of pointer comparisons in dynamic_to_static pass

add an optional weight freeze argument to onnx importer

convert onnx resize to dynamic op

add dynamic expand to onnx importer

add a shape_func for power

fix BERTSquad, lint

handle onnx graph initializer parameters more intelligently

* Dynamic ONNX importer: Upsampling and Pad (#2)

fix lint

fix Call reference

fix a type issue with expand

fix a bad test refactor

respond to review comments, fix batch matmul tests

* black format

* fix batch matmul test

* add dynamic strided slice to the onnx importer

* fix clip importer

* fix qnn tutorial

* fix bad merge, respond to review comments

* add a simple dynamic model test

* Add dynamic-shaped autopadding to convolution and pooling ops

* fix dynamic issues in a few ops

* fix pylint

* disable tests onnxrt doesn't support

* fix pytorch test

* respond to review comments

* add documentation about partially supporting dynamic shapes

Co-authored-by: Lily Orth-Smith <lorthsmith@octoml.ai>
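The recurring pattern in the commit message above is deciding, per ONNX operand, whether a shape-like input is a compile-time graph initializer or a runtime tensor, and emitting a static or dynamic op accordingly. A minimal sketch of that dispatch — the `ops` namespace and the converter signature are hypothetical stand-ins, not TVM's actual frontend API:

```python
# Hypothetical sketch of the static-vs-dynamic dispatch described above.
# `ops`, `inputs`, and `initializers` are stand-ins, not TVM's real API.
def convert_expand(inputs, initializers, ops):
    data, shape = inputs
    if shape.name in initializers:
        # The target shape is a graph initializer, i.e. a compile-time
        # constant: emit the static op so later passes see fixed dims.
        static_shape = tuple(int(d) for d in initializers[shape.name])
        return ops.broadcast_to(data, shape=static_shape)
    # The shape is only known at runtime: emit the dynamic variant,
    # which takes the shape as a tensor operand.
    return ops.dyn.broadcast_to(data, shape)
```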
CloudManX pushed a commit to CloudManX/incubator-tvm that referenced this issue Oct 30, 2020
MasterJH5574 pushed a commit to MasterJH5574/tvm that referenced this issue Feb 26, 2022
Co-authored-by: ZihengJiang <ziheng@apache.org>
MasterJH5574 pushed a commit to MasterJH5574/tvm that referenced this issue Mar 3, 2022
Co-authored-by: ZihengJiang <ziheng@apache.org>
zxy844288792 pushed a commit to zxy844288792/tvm that referenced this issue Mar 4, 2022
MasterJH5574 pushed a commit to MasterJH5574/tvm that referenced this issue Mar 7, 2022
[SparseTIR] Constructors and Python Interface for `Axis` and `SparseBuffer` (apache#2)

* add methods for Object

* axis constructors

* methods for SparseBuffer

* put into registry

* python interface

[CherryPick][Intrinsic] lower_bound and upper_bound for binary search in Sparse TIR. (apache#483) (apache#4)

* upd

* upd

* fix

* upd

* upd

* upd

* upd

* upd

* fix

* upd

* upd

* upd

* upd

* upd

* upd

* upd

* codegen-rule

* upd

* upd

* test

* upd

* fix

* two arguments

Co-authored-by: Zihao Ye <expye@outlook.com>
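For reference, the `lower_bound`/`upper_bound` intrinsics cherry-picked here follow the usual half-open binary-search semantics, matching C++'s `std::lower_bound`/`std::upper_bound`; a plain-Python equivalent:

```python
def lower_bound(arr, val, lo, hi):
    # First index in [lo, hi) with arr[idx] >= val.
    while lo < hi:
        mid = (lo + hi) // 2
        if arr[mid] < val:
            lo = mid + 1
        else:
            hi = mid
    return lo

def upper_bound(arr, val, lo, hi):
    # First index in [lo, hi) with arr[idx] > val.
    while lo < hi:
        mid = (lo + hi) // 2
        if arr[mid] <= val:
            lo = mid + 1
        else:
            hi = mid
    return lo
```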

Fix AxisTree (apache#3)

* fix axis tree

* upd

[SparseTIR] Add SparseBufferLoad/SparseBufferStore (apache#5)

* Add dtype for SparseBuffer

* Add name for SparseBuffer. Remove `ndim`

* Remove namespace sparse

* Add SparseBufferLoad/Store

* Add method `ndim()`

[SparseTIR] Introduce SpIterVar (apache#6)

* [SparseTIR] Introduce SpIterVar

* Add conversion to PrimExpr

[BugFix] Fix binary search & SpIterVar (apache#7)

[BugFix] Add field `is_reduction` for SpIterVar (apache#9)

* [BugFix] Add field `is_reduction` for SpIterVar

* Formatting

[SparseTIR] Index Lowering (apache#8)

* Add StmtFunctor/ExprFunctor for SparseBufferStore/Load

* Add basic index lowering

* Finish index lowering (maybe)

* Address comments

* Convert CRLF to LF
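Index lowering here means rewriting a sparse access such as A[i, j] into a read of the flat value buffer through the format's auxiliary index arrays. For a CSR-style sparse axis the rewrite is conceptually the following sketch (hypothetical buffer names; Python's `bisect.bisect_left` plays the role of the `lower_bound` intrinsic):

```python
import bisect

def lowered_csr_load(a_data, indptr, indices, i, j):
    # A[i, j] lowers to: binary-search row i's column segment for j,
    # then read the flat value buffer at the found position.
    # (Assumes (i, j) is actually a stored nonzero.)
    pos = bisect.bisect_left(indices, j, indptr[i], indptr[i + 1])
    return a_data[pos]
```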

Frontend update, demo scripts. (apache#10)

* Format and Buffer data structure (apache#1)

* [SparseTIR] Constructors and Python Interface for `Axis` and `SparseBuffer` (apache#2)

* [CherryPick][Intrinsic] lower_bound and upper_bound for binary search in Sparse TIR. (apache#483) (apache#4)

* Fix AxisTree (apache#3)

* [SparseTIR] Add SparseBufferLoad/SparseBufferStore (apache#5)

* [SparseTIR] Introduce SpIterVar (apache#6)

* [BugFix] Fix binary search & SpIterVar (apache#7)

* [BugFix] Add field `is_reduction` for SpIterVar (apache#9)

* upd

* upd

Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>

[SparseTIR] SparseBlock on C++/Python side (apache#11)

* Fix a bug in the last commit

* SparseBlock on C++ & Python side

[BugFix][SparseTIR] TVMScript Parser for Axis & SpIterVar (apache#12)

* Update `cord` and `pos`

* Fix `idtype`

* Formatting..

* Bug fix 1

* Move new special stmts

* Parser for Axis and SpIterVar

* Fix context_maintainer.py

[SparseTIR] Enhance SparseBlock to contain enough PrimFunc information (apache#13)

* Enhance SparseBlock to have enough PrimFunc info

* Remove `func_sparse_buffer_map_`

* Don't print the map uh-huh

[SparseTIR] Parser, Printer, Roundtrip (apache#14)

* SparseBlock scope handler (part 1)

* SparseBlock scope handler (part 2)

* SparseBlock scope handler (part 3)

* SparseBlock scope handler (fix 1)

* Add SparseBufferLoad/Store on Python side

* Parser for SparseBufferLoad/Store

* Add SparseBlock to Python __init__

* StmtFunctor for SparseBlock

* Ensure at least one dimension for SparseBuffer

* Make `axis` field of SpIterVar mandatory

* SparseBlock scope handler (fix 2)

* Update Axis syntax by removing `name` parameter

* Move to intrin.py

* Add field `from_sparse` to DenseFixedAxis

* SparseTIR script printer

* Roundtrip test

* `update_symbol` bug fix

* Fix attr visit in SparseBuffer

* Define then compare in SparseBlock

* Fix printer bug for SparseBuffer

* Enable graph match for Axis and SparseBuffer

* Complete HashReduce and EqualReduce for AxisTree and SparseBuffer

* Fix typo

* Rename test

* Bug fix 1

* Bug fix 2

* Add more tests
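The roundtrip test asserts the standard printer/parser property: printing a function to script and parsing it back yields a structurally equal IR. Schematically, with the printer, parser, and equality check passed in as stand-ins for whatever the IR provides:

```python
def assert_roundtrip(func, print_script, parse_script, structural_equal):
    # Any printer/parser mismatch surfaces as a structural diff here.
    script = print_script(func)
    reparsed = parse_script(script)
    assert structural_equal(func, reparsed), "round-trip changed the IR"
```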

Move tests (apache#15)

[SparseTIR] ReprPrinter for Axis and SpIterVar (apache#16)

upd (apache#17)

flatten (apache#18)

ELL and BSR correctness test scripts (apache#19)

[SparseTIR] SparseTIR Lowering (apache#20)

* Fix a previous bug of sparse-fixed SpIterVar creation

* Fix a previous bug in `GetDenseValue`

* Refactor Collector and IndexTransformer

* Construct block and loops

* Fix a previous bug which rejects DV iters in collector

* Update buffer map

* Create root block

* Fix bug of sparse-fixed SpIterVar creation

* Fix bug on SpIterVar conversion (with refactor)

* Fix bug when getting dependent SpIterVars

* Fix bug on dependency map and index lowering

* Full block read/write region

* Test version 1

* Fix bug of loop order

* Fix bug of batch-mm iterator ordering

* Update PrimFunc args to use symbolic params

* Fix bug of test "csr_element_wise"

* Fix bug of index accumulation for sparse-fixed axis

* Update correctness test

* Test structural equality

* Refactor and use Array

fix nnz cols
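As a concrete anchor for what the lowering produces, the "csr_element_wise" case above amounts to a loop nest over the position space of the sparse axis; a plain-Python rendering (not the generated TIR):

```python
def csr_element_wise(indptr, a_data, b_data, num_rows, scale=2.0):
    # b_data is preallocated with the same nonzero layout as a_data.
    for i in range(num_rows):                      # dense row axis
        for p in range(indptr[i], indptr[i + 1]):  # sparse column axis,
            b_data[p] = a_data[p] * scale          # iterated by position
```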

Add docstring for sparse tir lowering (apache#21)

* add docstring

* upd

Add more examples part 1 (sddmm) (apache#22)

* upd

* upd

* upd
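SDDMM (sampled dense-dense matrix multiplication) computes dense dot products only at the stored nonzeros of a sparse mask. Reference semantics over a CSR mask, with hypothetical buffer names:

```python
def sddmm(indptr, indices, x, y, out_data, num_rows, feat_dim):
    # out[i, j] = <x[i, :], y[j, :]> sampled at each stored (i, j);
    # out_data is preallocated with the mask's nonzero layout.
    for i in range(num_rows):
        for p in range(indptr[i], indptr[i + 1]):
            j = indices[p]
            out_data[p] = sum(x[i][k] * y[j][k] for k in range(feat_dim))
```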

[SparseTIR][Schedule] SparseBlockRV, GetSparseBlock, SparseReorder (apache#23)

* Test initialization

* Fix a stupid bug of ReprPrinter

* Add SparseBlockRV

* Schedule: GetSparseBlock

* Schedule: Reorder

[SparseTIR][Schedule] GetSpIters (apache#24)

remove hybrid script for successful compilation

Add atomic intrinsic for output nonzero inference. (apache#25)

* upd

* upd

Add "sparse" block attribute. (apache#26)

Revert "remove hybrid script for successful compilation"

This reverts commit eebd7c1.

[SparseTIR] Hack `IsAffineBinding` check (apache#27)

* [TensorIR][Schedule] Inherit block annotation upon creating new blocks

* Fix SDDMM test

* Hack IsAffineBinding for sparse blocks

Axis Dependency Tree aware code-gen and bmm example (apache#28)

* upd

* upd

* upd

* upd

* upd

* upd

* upd

* upd

* remove redundancy

* fix

* upd

* upd

Re-design Indices lowering (apache#29)

* upd

* upd

* upd

* upd

* upd

* init

* format

* fix

* revise coding-style

* format

Complete indices lowering (apache#30)

* upd

* upd

* upd

* done

* upd

* passed test

* upd

Add more docstrings and suppress warnings for new lowering algorithm. (apache#31)

Refactor derived axis, frontend support of fusion. (apache#32)

* upd

* upd

* fix

Fatal bugfix and change the signature of DenseVariableAxis.  (apache#33)

Syntax simplification (apache#34)

Change the order of generated blocks for block isolation. (apache#35)

* upd

* upd

* upd

Syntax of AttachAxis for BMM (apache#36)

* upd

* upd

* upd

[SparseTIR] Add "square sum" lowering test (apache#37)

* Add square sum test

* Remove pylint comment

[BugFix] Fix offset caching in lowering (apache#38)

* Hack compact dataflow check in a dirty way

* Add two-K square sum test

* Mark skipped tests

* Fix offset saving in lowering

Fusion syntax fix + SDDMM example.  (apache#39)

Some structure change on update offsets. (apache#40)

[Refactor] SparseTIR Lowering (apache#41)

* Take out methods in Scope

* Refactor

* Refactor "match"

* Tweak scope contents

* Refactor ViewIndexInAxis

* Refactor Scope

* SDDMM tests under implementation

* Refactor block stack

* Use Map for var_map

* Extract NeedCreateNewBlock

* Simplify SpIterVarToIterVar via GetIterExtent

* Refactor NeedCreateNewBlock

* Add docstring

* Use "auto" correctly

* Minor refactor and use some move

Remove redundant analyzers (apache#42)

Support indices lowering for attach and fuse. (apache#43)

* upd

* upd

* upd

Fix irregular BMM example. (apache#44)

* upd

* upd

* upd

* upd

RGCN forward and butterfly pattern example. (apache#45)

Fused SDDMM example. (apache#46)

* upd

* wip

* fix

Fix sparse reorder after refactor (apache#47)

[Refactor] Refactor Unittest (apache#48)

* upd

* remove redundancy

[Unittest] Correctness test for benchmarking scripts (apache#49)

Bugfix and more test for axis fusion, new workload (apache#50)

* upd

* upd

upd
prateek9623 pushed a commit to prateek9623/tvm that referenced this issue May 1, 2022
cconvey pushed a commit to cconvey/tvm that referenced this issue Jun 13, 2022
cconvey pushed a commit to cconvey/tvm that referenced this issue Jun 13, 2022
jinhongyii pushed a commit to jinhongyii/tvm that referenced this issue Jun 20, 2022
Co-authored-by: ZihengJiang <ziheng@apache.org>
jinhongyii pushed a commit to jinhongyii/tvm that referenced this issue Jun 20, 2022
Hzfengsy referenced this issue in Hzfengsy/tvm Jul 30, 2022
Co-authored-by: ZihengJiang <ziheng@apache.org>
areusch pushed a commit that referenced this issue Aug 25, 2022
* Revert "[skip ci] Revert "[ci] Default to n=2 for test parallelism (#12376)" (#12413)"

This reverts commit 478b672.

* [ci] Default to n=2 for test parallelism

This is attempt #2 of #12376 which was reverted in #12413. The changes
in `plugin.py` should keep all the tests on the same node so sporadic
failures don't happen due to scheduling.

Co-authored-by: driazati <driazati@users.noreply.github.com>
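The scheduling concern is standard pytest-xdist behavior: with `-n 2`, tests are distributed across two worker processes, so order-dependent tests must be pinned to a single worker. One common mechanism (not necessarily what TVM's `plugin.py` does) is the `xdist_group` mark combined with `--dist loadgroup`:

```python
# Run as: pytest -n 2 --dist loadgroup
# Tests in the same xdist_group always run on the same worker process,
# so state shared between them is never split across nodes.
import pytest

_state = {}

@pytest.mark.xdist_group(name="ordered_pair")
def test_writer():
    _state["written"] = True

@pytest.mark.xdist_group(name="ordered_pair")
def test_reader():
    assert _state.get("written")
```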
mikepapadim pushed a commit to mikepapadim/tvm that referenced this issue Sep 21, 2022
Co-authored-by: ZihengJiang <ziheng@apache.org>
MasterJH5574 pushed a commit to MasterJH5574/tvm that referenced this issue Nov 20, 2022
Co-authored-by: ZihengJiang <ziheng@apache.org>
xinetzone pushed a commit to daobook/tvm that referenced this issue Nov 25, 2022
mehrdadh referenced this issue in mehrdadh/tvm Dec 7, 2022
[Relax][AOT] Add pass that mangles TIR PrimFunc names
zxybazh pushed a commit to zxybazh/tvm that referenced this issue Jan 20, 2023
* [IR] Introduce StructInfo

* StructInfoFunctor and Analysis Support

* [TVMScript] Parse type/shape annotation with StructInfo

* remove runtime type assign

* Remove type/shape during parsing (apache#2)

* Normalizer prep: simple checks and legacy function renaming.

* Struct info deduction in BlockBuilder.

* Two TODOs

* StructInfo Normalizer Fixes (apache#3)

* StructInfo AST Fix

* Fix Extern Func Deduction and shape mutator.

* Update VoidStructInfo & globalvar (apache#4)

* Fix passes and proper sinfo propagation.

* Refactor EraseToWellDefined to Enable Remapping

* [WIP] First stab at symbolic param tracking

* Update EraseToWellDefined to support symbolic shape return (apache#5)

* fix R.shape with ndim (apache#6)

* Remove update shape/type

* Address review comment, AnnotateTypeShape=>AnnotateStructInfo

* Update include/tvm/script/ir_builder/relax/frame.h

Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>

* Address comments

* Update printer to use structinfo (apache#7)

* Update Error mechanism to prep for obj loc based reporting

* Symbolic shape aware function call return value derivation.

The main flow works as follows:
- Match and populate shape_var_map and var_map by visiting each pair of
  param and call arguments.
- Call EraseToWellDefined to map the ret parameter to new result.
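A toy rendering of that flow, with strings standing in for symbolic shape variables (illustration only, not the actual C++ implementation):

```python
def derive_ret_shape(param_shapes, arg_shapes, ret_shape):
    shape_var_map = {}
    # Step 1: visit each (param, argument) pair and bind shape vars.
    for param, arg in zip(param_shapes, arg_shapes):
        for p_dim, a_dim in zip(param, arg):
            if isinstance(p_dim, str):            # symbolic dimension
                shape_var_map.setdefault(p_dim, a_dim)
    # Step 2: "erase to well-defined" -- substitute bound vars into the
    # return shape; anything still unbound degrades to unknown.
    return tuple(shape_var_map.get(d, "?") if isinstance(d, str) else d
                 for d in ret_shape)

# derive_ret_shape([("n", 4)], [(32, 4)], ("n", 8)) -> (32, 8)
```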

* [ANALYSIS] Refactor the well-formed check to only look at struct info.

* Update comments according to reviews.

* Update include/tvm/relax/struct_info.h

Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>

Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
Co-authored-by: Tianqi Chen <tqchen>
Co-authored-by: Ruihang Lai <ruihangl@cs.cmu.edu>
csullivan pushed a commit to csullivan/tvm that referenced this issue Feb 7, 2023
Co-authored-by: ZihengJiang <ziheng@apache.org>
csullivan pushed a commit to csullivan/tvm that referenced this issue Feb 7, 2023
Lunderberg pushed a commit to Lunderberg/tvm that referenced this issue Mar 3, 2023
Co-authored-by: ZihengJiang <ziheng@apache.org>
Lunderberg pushed a commit to Lunderberg/tvm that referenced this issue Mar 3, 2023
mikeseven pushed a commit to mikeseven/tvm that referenced this issue Sep 27, 2023
SIM-2897: Add packaging for TVM

Approved-by: Jeffrey Uong
LeiWang1999 referenced this issue in TileLang/tvm Oct 5, 2024
[Dev] Merge the latest bitblas modification to upstream
LeiWang1999 added a commit to LeiWang1999/tvm that referenced this issue Nov 8, 2024
* base tuner

* gpu schedule

* matmul ops

* initial commit

* refactor fast dlight to bit blas

* support i8 swizzle

* int8xint2 gemm (see the packing sketch after this list)

* update keep

* update lop3 cpp test

* all low int to float16 convert

* int8 fast decoding

* float16with scale

* annotate tc layout propa

* impl tir interleave test

* impl interleave weight.

* weight only propagation

* support layout propagate recover schedule of dequantize.

* refactor testing

* enhance gemv schedule for dynamic

* dequantize matmul initialization

* [refactor] move comments to BitBLAS

* evaluate pytorch integration

* evaluate correctness of weight only decode

* annotate mit license

* annotate apache/mit license

* init logger

* refactor ops test with pytest

* ladder_permutate implementation

* append tvm third party license

* scaling ladder permutate impl

* add storage dtype test

* implement lop3 permutation ops and related test

* support with propagate layout.

* update tvm license

* disable fmt in pytest

* implement cpu arch for consistency

* separate gemv schedule and gemv_dequantize schedule.

* fix typo

* refactor quantization

* init testing.

* refactor matmul and operators

* append dequantize and test items

* resolve license related items

* refactor implementation

* init read me.

* integration with faster transform imp

* integrate bug fix.

* update ignore

* improve code structure.

* update mit lisence

* remove gitkeep file

* provide simple tir benchmark result.

* enhance build

* auto layout deduce

* fix default tensorize.

* update ReadMe

* update readme

* update read me

* update readme

* simple fix

* readme fix

* update codeql

* update dependabot pipeline
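Several items in the list above (int8xint2 gemm, interleave weight, int8 fast decoding) revolve around packing sub-byte weights into bytes so they can be decoded quickly on the GPU. The packing side of that idea in a simplified form — the layout here is hypothetical; BitBLAS's real interleaving is chosen to match its LOP3-based decode:

```python
import numpy as np

def pack_int2_to_int8(weights):
    # Pack four 2-bit values into each byte (hypothetical layout).
    w = np.asarray(weights, dtype=np.uint8) & 0b11   # keep low 2 bits
    assert w.size % 4 == 0, "need a multiple of four values"
    w = w.reshape(-1, 4)
    packed = w[:, 0] | (w[:, 1] << 2) | (w[:, 2] << 4) | (w[:, 3] << 6)
    # Reinterpret the packed bytes as int8 storage.
    return packed.astype(np.int8)
```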
LeiWang1999 pushed a commit to LeiWang1999/tvm that referenced this issue Nov 14, 2024