refactor(//cpp/bin/torchtrtc)!: Rename enabled precisions argument to enable-precision

BREAKING CHANGE: This is a minor change, but it may cause scripts
that invoke torchtrtc to fail. We are renaming enabled-precisions to
enable-precision, since the argument can be repeated and the
singular form makes more sense.

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>
narendasan committed Feb 24, 2022
1 parent 7223fc8 commit 10957eb
Showing 3 changed files with 12 additions and 12 deletions.
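In practice the rename only changes the long-form spelling of the flag; the short flag -p and the accepted precision names are unchanged. A hypothetical before/after invocation (the module paths and input shape below are placeholder values, not taken from this commit):

```
# before this commit
torchtrtc model.ts model_trt.ts "(1,3,224,224)" --enabled-precision fp32 --enabled-precision fp16

# after this commit: same flag, new spelling, still repeatable (one precision per occurrence)
torchtrtc model.ts model_trt.ts "(1,3,224,224)" --enable-precision fp32 --enable-precision fp16
```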
cpp/bin/torchtrtc/README.md (8 changes: 4 additions & 4 deletions)
@@ -14,12 +14,12 @@ to standard TorchScript. Load with `torch.jit.load()` and run like you would run

 ```
 torchtrtc [input_file_path] [output_file_path]
-[input_specs...] {OPTIONS}
+[input_specs...] {OPTIONS}
-Torch-TensorRT is a compiler for TorchScript, it will compile and optimize
-TorchScript programs to run on NVIDIA GPUs using TensorRT
+torchtrtc is a compiler for TorchScript, it will compile and optimize
+TorchScript programs to run on NVIDIA GPUs using TensorRT
-OPTIONS:
+OPTIONS:
 -h, --help Display this help menu
 Verbiosity of the compiler
cpp/bin/torchtrtc/main.cpp (8 changes: 4 additions & 4 deletions)
@@ -249,11 +249,11 @@ int main(int argc, char** argv) {
   args::Flag sparse_weights(
       parser, "sparse-weights", "Enable sparsity for weights of conv and FC layers", {"sparse-weights"});
 
-  args::ValueFlagList<std::string> enabled_precision(
+  args::ValueFlagList<std::string> enabled_precisions(
       parser,
       "precision",
       "(Repeatable) Enabling an operating precision for kernels to use when building the engine (Int8 requires a calibration-cache argument) [ float | float32 | f32 | fp32 | half | float16 | f16 | fp16 | int8 | i8 | char ] (default: float)",
-      {'p', "enabled-precision"});
+      {'p', "enable-precision"});
   args::ValueFlag<std::string> device_type(
       parser,
       "type",
@@ -501,8 +501,8 @@ int main(int argc, char** argv) {
     }
   }
 
-  if (enabled_precision) {
-    for (const auto precision : args::get(enabled_precision)) {
+  if (enabled_precisions) {
+    for (const auto precision : args::get(enabled_precisions)) {
       auto dtype = parseDataType(precision);
       if (dtype == torchtrt::DataType::kFloat) {
         compile_settings.enabled_precisions.insert(torch::kF32);
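The flag is declared with args::ValueFlagList, so every occurrence of -p / --enable-precision contributes one more string, and the loop in the second hunk folds those strings into compile_settings.enabled_precisions. Below is a minimal, self-contained sketch of that same pattern, built on the single-header args library (args.hxx) that the diff uses; the Precision enum and parsePrecision helper are stand-ins for torchtrt::DataType and parseDataType, not the project's code:

```cpp
#include <iostream>
#include <set>
#include <string>

#include <args.hxx>  // https://github.com/Taywee/args

// Stand-in for torchtrt::DataType.
enum class Precision { kFloat, kHalf, kChar };

// Stand-in for parseDataType: map a user-supplied spelling to a precision.
Precision parsePrecision(const std::string& s) {
  if (s == "half" || s == "float16" || s == "f16" || s == "fp16") {
    return Precision::kHalf;
  }
  if (s == "int8" || s == "i8" || s == "char") {
    return Precision::kChar;
  }
  return Precision::kFloat;  // float | float32 | f32 | fp32 (the default)
}

int main(int argc, char** argv) {
  args::ArgumentParser parser("Repeatable --enable-precision demo");
  args::HelpFlag help(parser, "help", "Display this help menu", {'h', "help"});
  args::ValueFlagList<std::string> enabled_precisions(
      parser,
      "precision",
      "(Repeatable) Precision to enable when building the engine",
      {'p', "enable-precision"});

  try {
    parser.ParseCLI(argc, argv);
  } catch (const args::Help&) {
    std::cout << parser;
    return 0;
  } catch (const args::ParseError& e) {
    std::cerr << e.what() << std::endl << parser;
    return 1;
  }

  std::set<Precision> precisions;
  if (enabled_precisions) {
    // args::get() returns one entry per occurrence of -p / --enable-precision.
    for (const auto& p : args::get(enabled_precisions)) {
      precisions.insert(parsePrecision(p));
    }
  } else {
    precisions.insert(Precision::kFloat);  // fall back to the documented default
  }

  std::cout << precisions.size() << " precision(s) enabled" << std::endl;
  return 0;
}
```

Running the sketch with, say, `-p fp16 --enable-precision int8` would report two enabled precisions, which is the behavior the renamed flag is meant to convey.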
docsrc/tutorials/torchtrtc.rst (8 changes: 4 additions & 4 deletions)
@@ -17,12 +17,12 @@ to standard TorchScript. Load with ``torch.jit.load()`` and run like you would run
 .. code-block:: txt
 torchtrtc [input_file_path] [output_file_path]
-[input_specs...] {OPTIONS}
+[input_specs...] {OPTIONS}
-Torch-TensorRT is a compiler for TorchScript, it will compile and optimize
-TorchScript programs to run on NVIDIA GPUs using TensorRT
+torchtrtc is a compiler for TorchScript, it will compile and optimize
+TorchScript programs to run on NVIDIA GPUs using TensorRT
-OPTIONS:
+OPTIONS:
 -h, --help Display this help menu
 Verbiosity of the compiler
