feat: Makefile for trtorchrt.so example
Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>
narendasan committed Aug 10, 2021
1 parent 8581fd9 commit c60c521
Showing 11 changed files with 99 additions and 61 deletions.
5 changes: 4 additions & 1 deletion .gitignore
@@ -40,4 +40,7 @@ py/wheelhouse
py/.eggs
notebooks/.ipynb_checkpoints/
*.cache
-tests/py/data
+tests/py/data
+examples/**/deps/**/*
+!examples/**/deps/.gitkeep
+examples/trtorchrt_example/trtorchrt_example
4 changes: 2 additions & 2 deletions core/plugins/README.md
@@ -6,7 +6,7 @@ On a high level, TRTorch plugin library interface does the following :

- Uses TensorRT plugin registry as the main data structure to access all plugins.

-- Automatically registers TensorRT plugins with an empty namespace.
+- Automatically registers TensorRT plugins with an empty namespace.

- Automatically registers TRTorch plugins with `"trtorch"` namespace.

@@ -37,4 +37,4 @@ If you'd like to compile your plugin with TRTorch,

Once you've completed the above steps, upon successful compilation of TRTorch library, your plugin should be available in `libtrtorch_plugins.so`.

-A sample runtime application on how to run a network with plugins can be found <a href="https://github.com/NVIDIA/TRTorch/tree/master/examples/sample_rt_app">here</a>
+A sample runtime application showing how to run a network with plugins can be found <a href="https://github.com/NVIDIA/TRTorch/tree/master/examples/trtorchrt_example">here</a>
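
As a quick illustration of what that registration means in practice, a consumer of `libtrtorch_plugins.so` can look plugins up in the TensorRT plugin registry under the `"trtorch"` namespace once the library is loaded. This is a minimal sketch; the plugin name `"Interpolate"` and version `"1"` are assumptions, so check the plugin library for the actual registered names:

```cpp
#include <NvInfer.h>
#include <iostream>

int main() {
  // Assumes libtrtorch_plugins.so has already been loaded (linked in or
  // dlopen'd), so its static initializers have registered the TRTorch
  // plugins. The plugin name/version below are illustrative.
  auto* creator =
      getPluginRegistry()->getPluginCreator("Interpolate", "1", "trtorch");
  if (creator == nullptr) {
    std::cerr << "Plugin not found in the \"trtorch\" namespace" << std::endl;
    return 1;
  }
  std::cout << "Found plugin: " << creator->getPluginName() << " v"
            << creator->getPluginVersion() << std::endl;
  return 0;
}
```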
13 changes: 12 additions & 1 deletion docsrc/tutorials/runtime.rst
@@ -22,4 +22,15 @@ link ``libtrtorchrt.so`` in your deployment programs or use ``DL_OPEN`` or ``LD_PRELOAD``
you can load the runtime with ``torch.ops.load_library("libtrtorchrt.so")``. You can then continue to use
programs just as you would otherwise via PyTorch API.

-.. note:: If you are using the standard distribution of PyTorch in Python on x86, likely you will need the pre-cxx11-abi variant of ``libtrtorchrt.so``, check :ref:`Installation` documentation for more details.
+.. note:: If you are using the standard distribution of PyTorch in Python on x86, you will likely need the pre-cxx11-abi variant of ``libtrtorchrt.so``; check the :ref:`Installation` documentation for more details.
+
+.. note:: If you are linking ``libtrtorchrt.so``, the flags ``-Wl,--no-as-needed -ltrtorchrt -Wl,--as-needed`` will likely help, as most TRTorch runtime applications have no direct symbol dependency on anything in the TRTorch runtime.
+
+An example of how to use ``libtrtorchrt.so`` can be found here: https://github.com/NVIDIA/TRTorch/tree/master/examples/trtorchrt_example
+
+Plugin Library
+---------------
+
+If you use TRTorch as a converter to a TensorRT engine and your engine uses plugins provided by TRTorch, TRTorch
+ships the library ``libtrtorch_plugins.so``, which contains the implementations of the TensorRT plugins used by TRTorch
+during compilation. This library can be ``DL_OPEN``-ed or ``LD_PRELOAD``-ed like other TensorRT plugin libraries.
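
For instance, a deployment application could pull the plugin library in at startup along these lines (a minimal sketch; it assumes `libtrtorch_plugins.so` is on the dynamic loader's search path):

```cpp
#include <dlfcn.h>
#include <iostream>

int main() {
  // Loading the library runs its static initializers, which register the
  // TRTorch plugins with the TensorRT plugin registry.
  void* handle = dlopen("libtrtorch_plugins.so", RTLD_NOW | RTLD_GLOBAL);
  if (handle == nullptr) {
    std::cerr << "Failed to load plugin library: " << dlerror() << std::endl;
    return 1;
  }
  // ... deserialize and run TensorRT engines that use TRTorch plugins ...
  dlclose(handle);
  return 0;
}
```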
21 changes: 0 additions & 21 deletions examples/sample_rt_app/BUILD

This file was deleted.

36 changes: 0 additions & 36 deletions examples/sample_rt_app/README.md

This file was deleted.

14 changes: 14 additions & 0 deletions examples/trtorchrt_example/BUILD
@@ -0,0 +1,14 @@
package(default_visibility = ["//visibility:public"])

cc_binary(
    name = "trtorchrt_example",
    srcs = [
        "main.cpp",
    ],
    deps = [
        "//core/runtime:runtime",
        "@libtorch//:libtorch",
        "@libtorch//:caffe2",
        "@tensorrt//:nvinfer",
    ],
)
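
With the TRTorch Bazel workspace configured (including the `libtorch` and `tensorrt` external repositories), this target should be buildable from the repository root with something like `bazel build //examples/trtorchrt_example:trtorchrt_example`.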
14 changes: 14 additions & 0 deletions examples/trtorchrt_example/Makefile
@@ -0,0 +1,14 @@
CXX=g++
DEP_DIR=$(PWD)/deps
INCLUDE_DIRS=-I$(DEP_DIR)/libtorch/include -I$(DEP_DIR)/trtorch/include
LIB_DIRS=-L$(DEP_DIR)/trtorch/lib -L$(DEP_DIR)/libtorch/lib # -Wl,-rpath $(DEP_DIR)/tensorrt/lib -Wl,-rpath $(DEP_DIR)/cudnn/lib64
LIBS=-Wl,--no-as-needed -ltrtorchrt -Wl,--as-needed -ltorch -ltorch_cuda -ltorch_cpu -ltorch_global_deps -lbackend_with_compiler -lc10 -lc10_cuda
SRCS=main.cpp

TARGET=trtorchrt_example

$(TARGET):
	$(CXX) $(SRCS) $(INCLUDE_DIRS) $(LIB_DIRS) $(LIBS) -o $(TARGET)

clean:
	$(RM) $(TARGET)
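
The `-Wl,--no-as-needed`/`-Wl,--as-needed` pair around `-ltrtorchrt` mirrors the note added to `docsrc/tutorials/runtime.rst` above: the application never references a TRTorch symbol directly, so without these flags the linker would discard the runtime library, and the engine-execution support it registers with PyTorch at load time would never become available.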
53 changes: 53 additions & 0 deletions examples/trtorchrt_example/README.md
@@ -0,0 +1,53 @@
# trtorchrt_example

## Sample application using the TRTorch runtime library and plugin library

This sample demonstrates how to use the TRTorch runtime library `libtrtorchrt.so` along with the plugin library `libtrtorch_plugins.so`.

In this demo, we convert two models, `ConvGelu` and `Norm`, to TensorRT using the TRTorch Python API and perform inference using `trtorchrt_example`. In these models, the `Gelu` and `Norm` layers are expressed as plugins in the network.

### Generating Torch script modules with TRT Engines

The following command will generate the `conv_gelu.jit` and `norm.jit` TorchScript modules, which contain TensorRT engines.

```sh
python network.py
```

### `trtorchrt_example`
The main goal is to use the TRTorch runtime library `libtrtorchrt.so`, a lightweight library sufficient to deploy your TorchScript programs containing TRT engines.

1) Download releases of LibTorch and TRTorch from https://pytorch.org and the TRTorch GitHub repo, and unpack both in the `deps` directory.

```sh
cd examples/trtorchrt_example/deps
# Download the latest TRTorch release tarball (libtrtorch.tar.gz) from https://github.com/NVIDIA/TRTorch/releases
tar -xvzf libtrtorch.tar.gz
unzip libtorch-cxx11-abi-shared-with-deps-1.9.0+cu111.zip
```

> If cuDNN and TensorRT are not installed on your system or present in your `LD_LIBRARY_PATH`, do the following as well:
```sh
cd deps
mkdir cudnn && tar -xvzf <cuDNN TARBALL> --directory cudnn --strip-components=1
mkdir tensorrt && tar -xvzf <TensorRT TARBALL> --directory tensorrt --strip-components=1
cd ..
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/deps/trtorch/lib:$(pwd)/deps/libtorch/lib:$(pwd)/deps/tensorrt/lib:$(pwd)/deps/cudnn/lib64:/usr/local/cuda/lib
```

This gives maximum compatibility with system configurations for running this example, but in general you are better off adding `-Wl,-rpath $(DEP_DIR)/tensorrt/lib -Wl,-rpath $(DEP_DIR)/cudnn/lib64` to your linking command for actual applications.

2) Build and run `trtorchrt_example`

`trtorchrt_example` is a binary that loads the TorchScript modules `conv_gelu.jit` or `norm.jit` and runs the TRT engines on a random input using TRTorch runtime components. Check out `main.cpp` and the `Makefile` for the necessary code and compilation dependencies; a rough sketch follows below.
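
A minimal sketch of what such a runtime application might look like (illustrative, not the verbatim `main.cpp`; the input shape is an assumption):

```cpp
#include <iostream>
#include <vector>

#include <torch/script.h>

int main(int argc, const char* argv[]) {
  if (argc < 2) {
    std::cerr << "usage: trtorchrt_example <path-to-module.jit>" << std::endl;
    return 1;
  }
  // Linking libtrtorchrt.so is what makes modules containing embedded
  // TensorRT engines loadable; no TRTorch-specific API calls are needed.
  torch::jit::Module module = torch::jit::load(argv[1]);
  module.to(torch::kCUDA);

  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::randn({1, 3, 5, 5}, torch::kCUDA));  // assumed shape

  auto output = module.forward(inputs).toTensor();
  std::cout << output << std::endl;
  return 0;
}
```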

To build and run the app

```sh
cd examples/trtorchrt_example
make
# If your paths differ from the ones below, change them as necessary
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/deps/trtorch/lib:$(pwd)/deps/libtorch/lib:$(pwd)/deps/tensorrt/lib:$(pwd)/deps/cudnn/lib64:/usr/local/cuda/lib
./trtorchrt_example $PWD/norm.jit
```
examples/trtorchrt_example/deps/.gitkeep
Empty file.

examples/{sample_rt_app → trtorchrt_example}/main.cpp
File renamed without changes.

examples/{sample_rt_app → trtorchrt_example}/network.py
File renamed without changes.
