Relay/TRT Integration (whole graph only) (#54)
* Add tensorrt backend.

Fix merge

Fix merge and clean up logs

Add BiasAdd, Concat, padding ceil mode, and clean up code

Fix formatting and remove unused headers

uncomment models

Fix bug with variable input, clean up

Don't split batch norm

Move TRT execution to TrtExecutor

Clean up

Clean up

Add partitioning

Implement graph_runtime execution for Relay/TRT

Fix bug in extern op

Fix compilation

Add EnableTrt pass to perform the same modification as the previous whole-graph annotator

Re-enable NNVM TRT

Remove SimplifyBatchnorm, add rules for converting ops

Fix format, remove unused tests

Enable multiple outputs

Fix multiple outputs

Fix activation lookup

Fix missing newline at EOF

Add license header. Add consistency test to models

Add method to check whether TRT is used. Improve comments

Fix lint

Add util to check TRT version

Add if guards around TRT5.1 APIs

Add env var for workspace size, fix logger

fix build

Add TRT versioning to EnableTrt pass

Fix build error in DLR

Fix compile for DLR

Update dmlc-core, fix copyright header, undo change to includes

Remove unused headers

Fix IsTrtCompatible visitor and move op list to constructor

Add dropout to compatible ops for CheckTrtCompatible only. Add not compatible test

Add squeeze, transpose, reshape, pad, and reduce ops. Add transpose on weights workaround

Fix formatting. Add unit tests

Support transpose on weights for conv2d and dense. Support asymmetric padding. Temp fix for 1D inputs. Add unit tests for all ops.

Support StridedSlice, AdaptivePooling approximation, Pytorch addmm fixer pass

Support (2,3,0,1) transpose on weights

Allow stride to be incomplete. Support ConstantNode -> kWeight

Fix pass serialized graph by value in runtime. Allow inclusive count for strided pool

Add comments, disable failing test

Fix CI lint

Removed unused variables from TrtBuilder. Add more comments

Fix build for TRT4

Add GetTrtVersion(), move convert map to function, remove unneeded include, make batch_size_ and logger_ TrtBuilder members, check output existence

Use shared_ptr for converters. Add check for num outputs and inputs

Support image.resize

Make GetOpConverters return a shared_ptr

Clarify count inclusive padding weirdness

Use external codegen/runtime

Move to src/runtime/contrib/tensorrt. Add Save and Load methods for tensorrt module. Rename some classes

Require format to be tensorrt so that loader knows how to load

FoldConstants

Destroy engine and context after use. Store TRT weights from op converters. Formatting

Always apply ConvertLayout to NCHW

Clean up

Add ASF header

Change ObjectRef -> NodeRef

Fix lint

Fix pylint

Fix bug with scalar weights

Making TRT cmake more informative

Make tensorrt tests dependent on whether trt codegen is enabled

Add serialization test.

* Refactor EnableTRT checkers

* Fix const weight detection

* remove tensorrt_module.h, add test for multiple outputs. Use normal GetShape. Remove GetType. Add flag for additional model testing

Undo add comments to prevent conflicts

* Separate TRT from relay. Add docstrings and more comments. Move all passes to python. Remove double lookup for Execute

Formatting

Fix lint

Fix pylint

Rename codegen_tensorrt. Check registry get. Add comments

Make trt codegen off by default.

* disable for ci

* TRT codegen can be turned on independently

* Fix tests

* Fix build without runtime

* Enable AvgPool approximation

* Remove change to cmake config

* Move passes to PreprocessForTrt. Use op.name. Rename LegalizeLayoutTransform.

* Add newline at EOF. Remove else. Reserve space for vectors

* Remove AdaptivePool2D commented-out code. Add comment for transposed weight workaround

* Rename IsCompatibleFn

* Use ++i instead of i++

* Improve incompatible messages, use string::empty, small improvements

* Use constructor to fill func_params

* Remove std::move

* Use opt level 3, add helper to check whether to run test, improve load_params

* Replace TransposeRSCKtoCKRS/KCRS with TransposeWeights4D

* Clean up VisitExpr(CallNode) for args
Trevor Morris authored Jan 24, 2020
1 parent 1ce36ec commit ea78f1d
Showing 13 changed files with 3,425 additions and 4 deletions.
36 changes: 32 additions & 4 deletions cmake/modules/contrib/TensorRT.cmake
@@ -15,22 +15,50 @@
# specific language governing permissions and limitations
# under the License.

# TensorRT Module

# TensorRT Runtime
if(USE_TENSORRT)
  # Enable codegen as well
  SET(USE_TENSORRT_CODEGEN ON)
  if(IS_DIRECTORY ${USE_TENSORRT})
    set(TENSORRT_ROOT_DIR ${USE_TENSORRT})
    message(STATUS "Custom TensorRT path: " ${TENSORRT_ROOT_DIR})
  endif()
  find_path(TENSORRT_INCLUDE_DIR NvInfer.h HINTS ${TENSORRT_ROOT_DIR} PATH_SUFFIXES include)
  find_library(TENSORRT_LIB_DIR nvinfer HINTS ${TENSORRT_ROOT_DIR} PATH_SUFFIXES lib)
  find_package_handle_standard_args(TENSORRT DEFAULT_MSG TENSORRT_INCLUDE_DIR TENSORRT_LIB_DIR)
  if(NOT TENSORRT_FOUND)
    message(ERROR "Could not find TensorRT.")
  endif()
  file(GLOB TENSORRT_SRCS src/contrib/subgraph/*.cc)
  message(STATUS "TENSORRT_LIB_DIR: " ${TENSORRT_LIB_DIR})
  include_directories(${TENSORRT_INCLUDE_DIR})
  list(APPEND RUNTIME_SRCS ${TENSORRT_SRCS})
  list(APPEND TVM_RUNTIME_LINKER_LIBS ${TENSORRT_LIB_DIR})

  # NNVM TRT runtime sources
  file(GLOB TENSORRT_NNVM_SRCS src/contrib/subgraph/*.cc)
  list(APPEND RUNTIME_SRCS ${TENSORRT_NNVM_SRCS})

  # Relay TRT runtime sources
  file(GLOB TENSORRT_RELAY_CONTRIB_SRC src/runtime/contrib/tensorrt/*.cc)
  list(APPEND RUNTIME_SRCS ${TENSORRT_RELAY_CONTRIB_SRC})
  list(APPEND RUNTIME_SRCS src/relay/backend/contrib/tensorrt/common_utils.cc)

  # Set defines
  set_source_files_properties(${RUNTIME_GRAPH_SRCS}
    PROPERTIES COMPILE_DEFINITIONS "TVM_GRAPH_RUNTIME_TENSORRT")
endif()
# TensorRT Codegen only. This can be enabled independently of USE_TENSORRT to
# enable compilation of TensorRT modules without requiring TensorRT to be
# installed. The compiled modules will only be able to be executed using a TVM
# built with USE_TENSORRT=ON.
if(USE_TENSORRT_CODEGEN)
  message(STATUS "Build with TensorRT codegen")
  # Relay TRT codegen sources
  file(GLOB TENSORRT_RELAY_CONTRIB_SRC src/relay/backend/contrib/tensorrt/*.cc)
  list(APPEND COMPILER_SRCS ${TENSORRT_RELAY_CONTRIB_SRC})
  list(APPEND COMPILER_SRCS src/runtime/contrib/tensorrt/tensorrt_module.cc)
  # If runtime is enabled also, set flag for compiler srcs
  if(USE_TENSORRT)
    set_source_files_properties(${COMPILER_SRCS}
      PROPERTIES COMPILE_DEFINITIONS "TVM_GRAPH_RUNTIME_TENSORRT")
  endif()
endif()
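
For orientation, the two flags above support two configurations. A hedged config.cmake sketch (the TensorRT path is an illustrative assumption, not part of this commit):

# Build both the TensorRT runtime and codegen, pointing at a custom install:
set(USE_TENSORRT /path/to/TensorRT)

# Or build codegen alone on a machine without TensorRT installed; modules
# compiled this way can only be executed by a TVM built with USE_TENSORRT=ON:
set(USE_TENSORRT_CODEGEN ON)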
194 changes: 194 additions & 0 deletions python/tvm/relay/tensorrt.py
@@ -0,0 +1,194 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=invalid-name,arguments-differ,no-else-return,unused-argument,missing-docstring
"""
Relay TensorRT codegen.
"""
import tvm
from tvm import relay
from tvm.relay.expr import Call, Constant

from . import _transform
from .expr_functor import ExprMutator

def _bind_params(func, params):
    """
    Bind the params to the expression as constants.
    """
    name_dict = {}
    for arg in func.params:
        name = arg.name_hint
        if name in name_dict:
            name_dict[name] = None
        else:
            name_dict[name] = arg
    bind_dict = {}
    for k, v in params.items():
        if k not in name_dict:
            continue
        arg = name_dict[k]
        if arg is None:
            raise ValueError("Multiple args in the function have name %s" % k)
        bind_dict[arg] = relay.expr.const(v)
    return relay.expr.bind(func, bind_dict)

class LegalizeLayoutTranform(ExprMutator):
    """
    Legalize Relay layout transforms to transpose ops to simplify TensorRT conversion.
    """
    def visit_call(self, expr):
        visit = super().visit_call(expr)
        if expr.op == tvm.relay.op.get("layout_transform"):
            src_layout = expr.attrs['src_layout']
            dst_layout = expr.attrs['dst_layout']
            if src_layout == "NCHW" and dst_layout == "NHWC":
                return relay.transpose(visit, axes=[0, 2, 3, 1])
            elif src_layout == "NHWC" and dst_layout == "NCHW":
                return relay.transpose(visit, axes=[0, 3, 1, 2])
            elif src_layout == "HWIO" and dst_layout == "OIHW":
                return relay.transpose(visit, axes=[3, 2, 0, 1])
            elif src_layout == "HWOI" and dst_layout == "OIHW":
                return relay.transpose(visit, axes=[2, 3, 0, 1])
            # May be unneeded.
            elif src_layout == "HWIO" and dst_layout == "IOHW":
                return relay.transpose(visit, axes=[2, 3, 0, 1])
        return visit

class RemoveDropout(ExprMutator):
    """
    Removes all nn.dropout from an expr.
    """
    def visit_tuple_getitem(self, expr):
        visit = super().visit_tuple_getitem(expr)
        if visit.index != 0:
            return visit
        elif isinstance(visit.tuple_value, Call) and visit.tuple_value.op.name == "nn.dropout":
            return visit.tuple_value.args[0]
        return visit

class RemoveMultiplyByOne(ExprMutator):
    """
    Removes multiply by 1.0f. This pass, when followed by
    RemoveRedundantTranspose, is intended to remove the pattern
    Transpose([1, 0]) -> Scale(1.0f) -> Transpose([1, 0]) produced by
    PyTorch's addmm operator.
    """
    def visit_call(self, expr):
        if expr.op.name == "multiply":
            if isinstance(expr.args[1], Constant):
                data = expr.args[1].data.asnumpy()
                if data.shape == () and data.item() == 1.0:
                    return expr.args[0]
        return super().visit_call(expr)

class RemoveRedundantTranspose(ExprMutator):
    """
    Removes Transpose([1, 0]) followed by Transpose([1, 0]). This pass, when
    preceded by RemoveMultiplyByOne, is intended to remove the pattern
    Transpose([1, 0]) -> Scale(1.0f) -> Transpose([1, 0]) produced by
    PyTorch's addmm operator.
    """
    def check_axes(self, axes):
        return len(axes) == 2 and int(axes[0].value) == 1 and int(axes[1].value) == 0

    def visit_call(self, expr):
        if expr.op.name == "transpose":
            if self.check_axes(expr.attrs['axes']):
                if isinstance(expr.args[0], Call) and expr.args[0].op.name == "transpose":
                    if self.check_axes(expr.args[0].attrs['axes']):
                        return expr.args[0].args[0]
        return super().visit_call(expr)

def PreprocessForTrt(mod):
    """Applies passes to prepare the main function for TensorRT conversion.

    Parameters
    ----------
    mod: Module
        The original module.

    Returns
    -------
    mod: Module
        The module modified for TensorRT.
    """
    mod['main'] = LegalizeLayoutTranform().visit(mod['main'])
    mod['main'] = RemoveDropout().visit(mod['main'])
    mod['main'] = RemoveMultiplyByOne().visit(mod['main'])
    mod['main'] = RemoveRedundantTranspose().visit(mod['main'])
    return mod

def GetTrtVersion():
    """Gets the version of TensorRT that TVM is built against.

    Returns
    -------
    ret: Tuple[int]
        TensorRT version as a tuple of major, minor, and patch number. If TVM
        is not built with TensorRT, an empty tuple is returned instead.
    """
    return tuple(map(int, _transform.GetTrtVersion()))

def IsTrtRuntimeAvailable():
    """Returns True if this build of TVM has the TensorRT runtime available."""
    if not tvm.get_global_func("relay._transform.GetTrtVersion", True):
        return False
    return GetTrtVersion() != ()

def EnableTrt(mod, params=None, trt_version=None):
    """Converts the "main" function in the module into one that can be
    executed using TensorRT. If any of the operators are not supported by the
    TensorRT conversion, the unmodified program will be returned instead.

    Parameters
    ----------
    mod: Module
        The original module.

    params : dict of str to NDArray
        Input parameters to the graph that do not change
        during inference time. Used for constant folding.

    trt_version : Optional[Tuple[int]]
        Which version of TensorRT to target for partitioning as a tuple of
        (major, minor, patch). If not specified, will attempt to get it using
        GetTrtVersion.

    Returns
    -------
    mod: Module
        The modified module which will use the TensorRT runtime if compatible.
    """
    if not trt_version:
        trt_version = GetTrtVersion()
    # If TVM wasn't built against TRT, default to targeting TRT 6. Since the
    # actual conversion to TRT is done at runtime, building against TRT is
    # not required for compilation.
    if not trt_version:
        trt_version = (6, 0, 1)
    assert isinstance(trt_version, (list, tuple))
    assert len(trt_version) == 3

    # Apply passes required for TRT.
    mod = relay.transform.RemoveUnusedFunctions()(mod)
    mod = relay.transform.InferType()(mod)
    mod = relay.transform.ConvertLayout('NCHW')(mod)
    mod = PreprocessForTrt(mod)
    if params:
        # Bind params so that we can use FoldConstant.
        mod['main'] = _bind_params(mod['main'], params)
    mod = relay.transform.FoldConstant()(mod)
    return _transform.EnableTrt(*trt_version)(mod)
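
A minimal end-to-end sketch of how this API is intended to be used; the toy network, shapes, and build flow below are illustrative assumptions, not part of this commit:

import numpy as np
import tvm
from tvm import relay
from tvm.relay.tensorrt import EnableTrt, GetTrtVersion

# Toy NCHW conv + relu network composed of TRT-compatible ops.
x = relay.var("x", shape=(1, 3, 224, 224), dtype="float32")
w = relay.var("w", shape=(16, 3, 3, 3), dtype="float32")
out = relay.nn.relu(relay.nn.conv2d(x, w, kernel_size=(3, 3), padding=(1, 1)))
mod = relay.Module.from_expr(relay.Function([x, w], out))
params = {"w": tvm.nd.array(np.random.rand(16, 3, 3, 3).astype("float32"))}

print(GetTrtVersion())  # () if this TVM build was not linked against TensorRT.

# EnableTrt returns the unmodified module if any operator is incompatible.
mod = EnableTrt(mod, params, trt_version=(6, 0, 1))
with relay.build_config(opt_level=3):
    graph, lib, build_params = relay.build(mod, target="cuda", params=params)

If EnableTrt succeeds, the whole graph is handed to TensorRT at runtime; otherwise the regular CUDA path is used unchanged.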
86 changes: 86 additions & 0 deletions src/relay/backend/contrib/tensorrt/codegen_tensorrt.cc
@@ -0,0 +1,86 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/

/*!
* \file src/relay/backend/contrib/tensorrt/codegen_tensorrt.cc
* \brief Implementation of TensorRT codegen APIs.
*/

#include <tvm/node/serialization.h>
#include <tvm/relay/attrs/nn.h>
#include <tvm/relay/expr_functor.h>
#include <tvm/relay/transform.h>
#include <tvm/relay/type.h>
#include <tvm/runtime/module.h>
#include <tvm/runtime/registry.h>

#include <fstream>
#include <sstream>

#include "../codegen_c/codegen_c.h"

namespace tvm {
namespace relay {
namespace contrib {

/*!
* \brief Generates a TensorRTModule from a relay expression. This "compilation"
* does not require TensorRT since the actual conversion using TensorRT APIs is
* deferred until runtime. This step simply serializes the relay program into a
* string.
*/
class TensorRTModuleCodegen : public CSourceModuleCodegenBase {
 public:
  runtime::Module CreateCSourceModule(const NodeRef& ref) override {
    std::string serialized_subgraph;
    if (ref->IsInstance<FunctionNode>()) {
      serialized_subgraph = SaveJSON(Downcast<Function>(ref)->body);
    } else if (ref->IsInstance<relay::ModuleNode>()) {
      relay::Module mod = Downcast<relay::Module>(ref);
      // TODO(trevmorr): support multiple functions. It is currently not
      // possible for there to be more than one TRT func, so not a problem yet.
      for (const auto& it : mod->functions) {
        serialized_subgraph = SaveJSON(Downcast<Function>(it.second)->body);
      }
    } else {
      LOG(FATAL) << "The input ref is expected to be a Relay function or module.";
    }
    const PackedFunc* pf = runtime::Registry::Get("tvm.contrib.tensorrt.create");
    CHECK(pf != nullptr) << "tvm.contrib.tensorrt.create was not found in the registry.";
    return (*pf)(serialized_subgraph);
  }
};

/*!
* \brief The external compiler/codegen tool. It takes a Relay expression/module
* and compiles it into a runtime module.
*/
runtime::Module TrtCompiler(const NodeRef& ref) {
  TensorRTModuleCodegen tensorrt;
  return tensorrt.CreateCSourceModule(ref);
}

TVM_REGISTER_API("relay.ext.tensorrt").set_body_typed(TrtCompiler);

} // namespace contrib
} // namespace relay
} // namespace tvm
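
For illustration, the entry point registered above can also be retrieved directly from Python; a hedged sketch (in normal use relay.build dispatches to it, and the module argument here is assumed):

import tvm

# "relay.ext.tensorrt" is the codegen registered by TVM_REGISTER_API above; it
# serializes the Relay body and wraps it via "tvm.contrib.tensorrt.create".
trt_codegen = tvm.get_global_func("relay.ext.tensorrt")
# trt_module = trt_codegen(mod)  # runtime::Module holding the serialized subgraph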
46 changes: 46 additions & 0 deletions src/relay/backend/contrib/tensorrt/common_utils.cc
@@ -0,0 +1,46 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/

/*!
* \file src/relay/backend/contrib/tensorrt/common_utils.cc
* \brief Utility functions used by compilation and runtime.
*/

#include "common_utils.h"

namespace tvm {
namespace relay {
namespace contrib {

std::vector<int> GetShape(const Type& type) {
  const auto* ttype = type.as<TensorTypeNode>();
  CHECK(ttype);
  std::vector<int> _shape;
  _shape.reserve(ttype->shape.size());
  for (size_t i = 0; i < ttype->shape.size(); ++i) {
    auto* val = ttype->shape[i].as<IntImm>();
    CHECK(val);
    _shape.push_back(val->value);
  }
  return _shape;
}

} // namespace contrib
} // namespace relay
} // namespace tvm