---
layout: index
title: Apache TVM
---
{::options parse_block_html="true" /}
{:.aboutInner}
* {:.aboutImgCol}
![Aboutimage](/assets/images/about-image.svg "aboutimage"){:.desktopImg}
![responsiveAbout](/assets/images/about-responsive-image.svg "responsiveAbout"){:.responsiveImg}
* {:.aboutDetailsCol}
#### About Apache TVM
The vision of the Apache TVM Project is to host a diverse community of experts and practitioners
in machine learning, compilers, and systems architecture to build an accessible, extensible, and
automated open-source framework that optimizes current and emerging machine learning models for
any hardware platform. TVM provides the following main features:
* Compilation of deep learning models into minimal deployable modules.
* Infrastructure to automatically generate and optimize models on more backends with better performance.
{:.key-title-text}
## Key Features & Capabilities
{:.mb-3}
Compilation and minimal runtimes commonly unlock ML workloads on existing hardware: CPUs, GPUs, browsers, microcontrollers, FPGAs, and more.
{:.mt-0.mt-lg-3}
Automatically generate and optimize tensor operators on more backends.
Need support for block sparsity, quantization (1, 2, 4, 8-bit integers, posit), random forests/classical ML, memory planning, MISRA-C compatibility, Python prototyping, or all of the above?
{:.mt-0.mt-lg-3}
TVM’s flexible design enables all of these things and more.
Compile deep learning models from Keras, MXNet, PyTorch, TensorFlow, CoreML, DarkNet, and more. Start using TVM with Python today; build out production stacks using C++, Rust, or Java the next day.
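As a rough illustration of that Python workflow, the sketch below imports an ONNX model, compiles it into a deployable module, and runs it with the graph executor. It assumes a Relay-era TVM (roughly 0.8 or later); the file name `model.onnx`, the input name `input`, and the input shape are placeholders, not part of any real model.

```python
import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Load a model through one of the supported frontends (ONNX here);
# the file name, input name, and shape are placeholders.
onnx_model = onnx.load("model.onnx")
mod, params = relay.frontend.from_onnx(onnx_model, {"input": (1, 3, 224, 224)})

# Compile the model into a deployable module for a CPU (LLVM) target.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# Export the compiled artifact as a shared library for deployment.
lib.export_library("compiled_model.so")

# Run the compiled module locally with the graph executor.
dev = tvm.cpu(0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
out = module.get_output(0).numpy()
```

The same exported library can then be loaded from the C++, Rust, or Java runtime APIs without the Python toolchain.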