
TVM 0.4 Release Note #1577

Closed · tqchen opened this issue Aug 9, 2018 · 9 comments

tqchen commented Aug 9, 2018

This release features several major improvements. The high-level graph optimizer is now part of the TVM repo. Some of the highlights: initial support of AutoTVM for automated optimization, and the customized accelerator backend VTA. Please also check out tvm.ai for the latest blog posts.

The community welcomes new reviewers @kazum @alex-weaver @masahi @zhreshold @PariksheetPinjari909 @srkreddy1238 @eqy, new code owner @merrymercy, and new committer @yzhliu

Change List

Tensor Expression and Optimization

  • Tensor operator primitives
    • Introduce an attrs field to operator primitives (e.g. compute) to store additional metadata; the attrs can be used as hints for scheduling (see the sketch after this list)
  • Enable embedding of asm micro-kernels
  • Hybrid python programming model
    • python AST based IR builder interface
    • support GPU programs
  • AutoTVM: automated tuning and scheduling (see the tuning sketch after this list)
    • basic autotvm infra
    • GPU IR verifier
    • basic autotuning tutorial
    • topi integration
  • ARM support
    • winograd support
    • initial support of ARM autotuning records
  • TOPI Vision
    • Generic GPU sort support (useful for vision)
    • SSD operator support
  • TOPI numpy consistency
    • Rename all binary operators for numpy consistency: broadcast_add -> add, broadcast_sub -> subtract, broadcast_mul -> multiply, broadcast_div -> divide (see the naming sketch after this list)
    • New operators: slice, LRN, equal, not_equal, less, greater
    • tutorials on topi
  • Initial low-bit operator support
    • Optimized popcount generation on ARM
    • general bit-serial convolution and GEMM
    • optimized low bit kernels
    • parallel optimization
  • New TOPI backend optimization for Intel Graphics
  • Adapt AVX schedules for SSE target
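
A minimal sketch of the new attrs field on tvm.compute (0.4-era API). The schedule_hint key and its value are made-up placeholders for illustration, not a TVM convention:

```python
import tvm

n = tvm.var("n")
A = tvm.placeholder((n,), name="A")
B = tvm.placeholder((n,), name="B")

# `attrs` carries arbitrary metadata on the operator node; a later
# scheduling step can read it back from C.op.attrs as a hint.
# The key and value below are purely illustrative.
C = tvm.compute((n,), lambda i: A[i] + B[i], name="C",
                attrs={"schedule_hint": "vectorize"})

s = tvm.create_schedule(C.op)
print(C.op.attrs)  # the metadata is preserved on the compute op
```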
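
A condensed AutoTVM sketch in the spirit of the basic autotuning tutorial. The matmul template, split factors, and log file name are illustrative, and the exact measure_option arguments may have differed slightly at the 0.4 release:

```python
import tvm
from tvm import autotvm

@autotvm.template
def matmul(N, L, M, dtype):
    A = tvm.placeholder((N, L), name="A", dtype=dtype)
    B = tvm.placeholder((L, M), name="B", dtype=dtype)
    k = tvm.reduce_axis((0, L), name="k")
    C = tvm.compute((N, M),
                    lambda i, j: tvm.sum(A[i, k] * B[k, j], axis=k),
                    name="C")
    s = tvm.create_schedule(C.op)

    # Describe the search space: how to tile the two spatial axes.
    cfg = autotvm.get_config()
    y, x = s[C].op.axis
    cfg.define_split("tile_y", y, num_outputs=2)
    cfg.define_split("tile_x", x, num_outputs=2)

    # Apply the configuration chosen by the tuner.
    yo, yi = cfg["tile_y"].apply(s, C, y)
    xo, xi = cfg["tile_x"].apply(s, C, x)
    s[C].reorder(yo, xo, yi, xi)
    return s, [A, B, C]

task = autotvm.task.create(matmul, args=(512, 512, 512, "float32"),
                           target="llvm")
measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(),
    runner=autotvm.LocalRunner(number=5))

tuner = autotvm.tuner.RandomTuner(task)
tuner.tune(n_trial=10, measure_option=measure_option,
           callbacks=[autotvm.callback.log_to_file("matmul.log")])
```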
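
A short sketch of the numpy-consistent TOPI spellings; the old broadcast_* names appear only in the comment for contrast, and the placeholder shapes are illustrative:

```python
import tvm
import topi

x = tvm.placeholder((1, 3), name="x")
y = tvm.placeholder((4, 3), name="y")

# formerly topi.broadcast_add / broadcast_sub / broadcast_mul / broadcast_div
a = topi.add(x, y)
s = topi.subtract(x, y)
m = topi.multiply(x, y)
d = topi.divide(x, y)

# the new comparison operators follow the same naming scheme
e = topi.equal(x, y)
g = topi.greater(x, y)
```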

Backend

  • VTA: customized accelerator backend
    • custom hardware backend example
    • tutorials on how to use customized accelerator
  • Initial experimental support for HLS backend
  • Bugfix in SPIR-V code generator for Vulkan
  • libdevice support, enable NVPTX backend

Runtime

  • Introduce NDArrayContainer for managed NDArray
  • RPC and Device API
    • Support communication between big/small endian machines.
    • RPC and device API protocol upgrade to support big/little-endian communication. This is a non-backward-compatible change; use the latest version of the TVM runtime with the RPC.
    • Graduate RPC from contrib, tvm.contrib.rpc -> tvm.rpc (see the sketch after this list)
      • Support tracker in Android RPC, add fault tolerance for AutoTVM
  • big.LITTLE-aware thread pool
  • tvm4j graph runtime that runs end-to-end workloads in Java
  • DLPack support
    • Support from_dlpack and to_dlpack
    • Enables bridges to PyTorch (see the DLPack sketch after this list)
  • Enable link of stackvm in runtime
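
A minimal sketch of connecting through the relocated tvm.rpc module; the server command, address, port, and library name are illustrative, and the module is assumed to have been cross-compiled and exported beforehand:

```python
from tvm import rpc  # graduated from tvm.contrib.rpc in this release

# Assumes an RPC server is already running on the target device, e.g.
#   python -m tvm.exec.rpc_server --host 0.0.0.0 --port 9090
remote = rpc.connect("192.168.0.42", 9090)  # address/port are illustrative

remote.upload("deploy.so")              # push a cross-compiled module
rlib = remote.load_module("deploy.so")  # load it back on the remote side
ctx = remote.cpu(0)                     # context on the remote device
```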
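
A minimal sketch of the DLPack bridge to PyTorch, assuming the from_dlpack/to_dlpack entry points live under tvm.nd and NDArray as described above:

```python
import torch
import torch.utils.dlpack
import tvm

# PyTorch tensor -> TVM NDArray (zero copy through the DLPack protocol)
x = torch.rand(3, 4)
tvm_x = tvm.nd.from_dlpack(torch.utils.dlpack.to_dlpack(x))

# TVM NDArray -> PyTorch tensor
y = torch.utils.dlpack.from_dlpack(tvm_x.to_dlpack())
```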

NNVM

  • TensorFlow GraphDef frontend
  • Keras frontend (see the importer sketch after this list)
    • Improved to support reused layers and additional activations
  • ONNX
    • gather, LRN
  • CoreML frontend
    • Support C-RNN and activation functions
  • Fix grads for sum and expand_like
  • Enhanced operator fusion for multiple elemwise branches
  • Separate nnvm fusion and compilation pass
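
A minimal sketch of the NNVM importer flow referenced above, using the Keras frontend; the ResNet50 model, input name, and shapes are illustrative:

```python
import keras
import nnvm
import nnvm.compiler

# Any Keras model works here; ResNet50 and the input name are illustrative.
model = keras.applications.resnet50.ResNet50(include_top=True, weights=None)
sym, params = nnvm.frontend.from_keras(model)

# TensorFlow, ONNX, and CoreML models go through the matching importers:
#   nnvm.frontend.from_tensorflow / from_onnx / from_coreml
shape_dict = {"input_1": (1, 3, 224, 224)}
graph, lib, params = nnvm.compiler.build(
    sym, target="llvm", shape=shape_dict, params=params)
```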

Misc

  • Unified the build system to CMake, with customizable CMake paths for Vulkan, ROCm, and CUDA

Contributors

See the complete list here. Thanks to all the contributors who contributed to this release.

Code reviewers

Compiler

TOPI, graph optimization

Frontends

Deploy

  • @eqy rpc, thread runtime
  • @dayanandasiet android tutorials

tqchen commented Aug 9, 2018

Thanks to everyone who has pushed to the last release cycle over the past three months. We would like to propose the release of v0.4 on Aug 13th.

We encourage everyone in the community to weigh in, review, and vote on the release. @dmlc/tvm-team

Please reply to this thread with:

  • Things that we missed in the release note
  • Bugfixes that need to be included in this release


masahi commented Aug 9, 2018

Operator fusion enhancement to NNVM is missing from the release note!


tqchen commented Aug 9, 2018

@masahi just added that


zhiics commented Aug 9, 2018

@tqchen fusion is now a separate pass.


tqchen commented Aug 9, 2018

@zhiics thanks for pointing this out, just added that to the release note


yzhliu commented Aug 10, 2018

GraphRuntime support for tvm4j - E2E inference in Java!


liangfu commented Aug 11, 2018

Broadcast operators like not_equal, greater_equal, and less_equal are now supported in both NNVM and TOPI.

tqchen closed this as completed Aug 13, 2018
apache locked this as resolved and limited the conversation to collaborators Aug 13, 2018

tqchen commented Aug 13, 2018

v0.5 roadmap is available at #1596
