forked from apache/tvm
[Memhammer] Meta Schedule Rules #7
Closed
Commits on Dec 30, 2021
- ae4b33d:
  - [Meta Schedule][M3c] Schedule Rules, Mutator & Postprocs (apache#485)
  - [Meta Schedule][M3c] PostOrderApply (apache#486)
  - Fix Post Order Apply (apache#490)
  - [MetaSchedule] Relay Integration (apache#489)
  - [M3c][Meta Schedule] Add Trace Correctness Test for PostOrderApply (apache#492)
  - Fix replay trace. (apache#493)
  - [M3c][Meta Schedule] Implement the Replay Func class. (apache#495)
  - [PR] Test script for meta-schedule task extraction. Interface to load… (apache#494)
  - [Meta Schedule Refactor] Get child blocks (apache#500)
  - Read-at && Write-at (apache#497)
  - [M3c][Meta Schedule] Measure Callbacks (apache#498)
  - [Bug] Fix Infinite Loop Caused When Calling Methods Not Overrided In PyClass (apache#496)
  - [MetaSchedule] Sample-Perfect-Tile (apache#501)
  - [MetaSchedule] TE Workloads (apache#502)
  - [TensorIR] GetProducer, GetConsumer (apache#506)
  - [MetaScheduleRefactor] Annotate&Unannotate (apache#505)
  - [MetaSchedule] Multi-Level-Tiling & Auto-Inline (apache#503)
  - [Tests] Add unittests for auto-inline and multi-level-tiling (apache#508)
  - [Meta Schedule] Minor Fixes (apache#507)
  - [MetaSchedule] Rewrite Cooperative-Fetching / Unbound-Block / Reduction-Block (apache#509)
  - [MetaSchedule] Rewrite Parallel-Vectorize-Unroll / Verify-GPU / Disallow-Dynamic-Loops (apache#499)
  - [Meta Schedule] Add Helper Function & Minor Modification (apache#512)
  - [MetaSchedule] Test for Rewrite Parallel-Vectorize-Unroll (apache#513)
  - [Meta Schedule] Feature Extractor & Cost Model (apache#510)
  - Blockize & Tensorize (apache#514)
  - Layout Rewriting: Suggest-Index-Map (apache#520)
  - [MetaSchedule] Parallel-Vectorize-Unroll & Random-Compute-Location (apache#516)
  - [Meta Schedule] Per-Store-Feature (apache#521)
  - Add traced schedule for blockize & tensorize (apache#526)
  - [Meta Schedule] Add XGBoost Model & Random Model (apache#519)
  - User-Interface: Tune-TIR (apache#525)
  - User-Interface: Tune-TE (apache#527)
  - [Minor] More logging on python (apache#528)
  - Get CUDA tuning working (apache#529)
  - [MetaSchedule] TensorRT BYOC (apache#518)
  - [BugFix] LocalBuilder API (apache#531)
  - [Meta Schedule] Add Cost Model Update Measure Callback (apache#530)
  - [Bugfix] BuilderInput with default params (apache#532)
  - [MetaSchedule] Mutator-Tile-Size, Mutate-Parallel, Mutate-Unroll (apache#534)
  - [Meta Schedule] Evolutionary Search (apache#522)
  - [BugFix] Remove duplicated definition of MakeMultinomialSampler (apache#535)
  - [Meta Schedule] Fix some bugs (apache#537)
  - Initiate Experiments for CPU Performance Alignment with Ansor (apache#538)
  - [Meta Schedule] Tweak experiment scripts (apache#539)
  - [Meta Schedule] Initiate experiments on CUDA (apache#540)
  - [TIR][Schedule] Buffer transform (apache#523)
  - Auto Tensor Core (apache#524)
  - Working on Evo Search (apache#542)
  - [Meta Schedule] Add Replay Tuning Interface (apache#543)
  - Evolutionary Search on CPU (apache#544)
  - Misc improvement over the error message (apache#545)
  - [TIR][Schedule] Software pipelining (apache#533)
  - [Meta Schedule Refactor] fixing unit tests (apache#547)
  - [MetaSchedule] Mutator-Compute-Location (apache#548)
  - Misc Improvement of Evolutionary Search (apache#549)
  - Hotfix for software pipeline (apache#552)
  - Misc Improvement (apache#550)
  - [Cherry-Pick][TensorIR] Primitive "SetScope" (apache#9738) (apache#555)
  - Rule RFactor (apache#551)
  - [MemHammer] Rewrite Rules (apache#554)
  - [MetaSchedule] Schedule Rule: Cross-Thread Reduction (apache#556)
  - [MetaSchedule] Performance Alignment - NRM and SFM (CUDA) (apache#559)
  - [MetaSchedule] Perf Alignment - NRM on CUDA (apache#560)
  - [TIR] Reorder the block iters of the blocks generated by RFactor (apache#561)

  Co-authored-by: Siyuan Feng <Hzfengsy@sjtu.edu.cn>
  Co-authored-by: Bohan Hou <32121147+spectrometerHBH@users.noreply.github.com>
  Co-authored-by: Hongyi Jin <3231950289@qq.com>
  Co-authored-by: Ruihang Lai <lairuihangdongdong@qq.com>
  Co-authored-by: Junru Shao <junrushao1994@gmail.com>
  Co-authored-by: Wuwei Lin <wuwei@apache.org>
  Co-authored-by: Sunghyun Park <49998730+sunggg@users.noreply.github.com>
  Co-authored-by: Xiyou Zhou <xiyou@octoml.ai>
- f8fc975
Commits on Dec 31, 2021
- ee94a73: [MemHammer] Lower Pass + Unittests (apache#557)
  * format new auto padding algorithm address comment revert black address comment address comment format finally over rename auto padding tmp make gemm work minor auto padder + mutator (undone)
  * add new line
  * address comment
Commits on Jan 1, 2022
- 7474fb4: Perf Align: Remove Auto-inline before Multi-level-tiling (apache#564)
  * minor change
  * revert
Commits on Jan 2, 2022
- b0fb8af
Commits on Jan 5, 2022
- a81e440
Commits on Jan 6, 2022
- e16003a
- c2f8106: [Meta schedule] improve search space (#1)
  * meta schedule perf align: misc improvement for search space
  * fix unittest
  * remove a log(info)
  * code review
  * update member name
  * init_max_fail_count to init_min_unmeasured
  Co-authored-by: Junru Shao <junrushao1994@gmail.com>
- 0f3892b
Commits on Jan 10, 2022
- 3d8c570
- b138d63
- 64601b9
- 58aaba1
- ed8c5cb