
[VISITOR] New ExprFunctor, StmtFunctor Interface. Modular analysis #58

Merged
7 commits merged into master on Mar 1, 2017

Conversation

tqchen
Member

@tqchen tqchen commented Feb 28, 2017

No description provided.

@tqchen tqchen changed the title from "[ARITH] Modular Analysis check if index can be divided by value" to "[VISITOR] New ExprFunctor, StmtFunctor Interface. Modular analysis" on Feb 28, 2017
@ZihengJiang
Contributor

ZihengJiang commented Feb 28, 2017

LGTM
Now we have Visitor, Mutator, ExprFunctor, StmtFunctor:

  • Visitor: simply traverses the IR, e.g. to apply a function to every node, or to detect whether a variable exists in an expression (since the "found" flag only ever flips from false to true, a plain Visitor is fine for this).
  • Mutator: rewrites the IR, e.g. IRConvertSSA.
  • ExprFunctor & StmtFunctor: for traversals that compute and return an analysis result for a whole expression or statement. If that result were instead stored as a private member of a Visitor, every recursive Visit call would share the same mutable field, which makes mistakes easy (see the sketch below).
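
To make the distinction concrete, here is a minimal, self-contained sketch of the ExprFunctor idea: each Visit call returns its result instead of writing into shared state. It is illustrative only; the real TVM ExprFunctor dispatches on node type keys through a virtual table rather than dynamic_cast, and the toy DivisorAnalyzer below is not the modular analysis added in this PR.

```cpp
#include <cstdlib>    // std::abs
#include <iostream>
#include <memory>
#include <numeric>    // std::gcd (C++17)
#include <stdexcept>

// Tiny stand-in IR: constants and additions.
struct Expr {
  virtual ~Expr() = default;
};
struct IntImm : Expr {
  int value;
  explicit IntImm(int v) : value(v) {}
};
struct Add : Expr {
  std::shared_ptr<Expr> a, b;
  Add(std::shared_ptr<Expr> a, std::shared_ptr<Expr> b)
      : a(std::move(a)), b(std::move(b)) {}
};

// ExprFunctor-style interface: VisitExpr dispatches on the concrete node
// type and returns an analysis result R, so recursive calls compose their
// results instead of sharing mutable member state.
// (dynamic_cast is used here only for brevity.)
template <typename R>
class ExprFunctor {
 public:
  virtual ~ExprFunctor() = default;
  R VisitExpr(const Expr* e) {
    if (auto* op = dynamic_cast<const IntImm*>(e)) return VisitExpr_(op);
    if (auto* op = dynamic_cast<const Add*>(e)) return VisitExpr_(op);
    throw std::runtime_error("unhandled expression node");
  }
  virtual R VisitExpr_(const IntImm* op) = 0;
  virtual R VisitExpr_(const Add* op) = 0;
};

// Toy divisibility analysis: return some value known to divide the expression.
class DivisorAnalyzer : public ExprFunctor<int> {
 public:
  int VisitExpr_(const IntImm* op) override {
    return op->value == 0 ? 1 : std::abs(op->value);
  }
  int VisitExpr_(const Add* op) override {
    // If d divides both operands, d divides the sum; the gcd of the two
    // sub-results is a safe (possibly conservative) divisor of a + b.
    return std::gcd(VisitExpr(op->a.get()), VisitExpr(op->b.get()));
  }
};

int main() {
  auto e = std::make_shared<Add>(std::make_shared<IntImm>(4),
                                 std::make_shared<IntImm>(6));
  DivisorAnalyzer analyzer;
  std::cout << analyzer.VisitExpr(e.get()) << "\n";  // prints 2
}
```

The returned-result style is what makes the analysis modular: the Add case just combines the two recursive results, and there is no member field that can be left stale between calls.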

@tqchen tqchen merged commit 7133448 into master Mar 1, 2017
@tqchen
Member Author

tqchen commented Mar 1, 2017

Merging anyway; Travis seems to be having an outage due to an S3 problem.
