Conversation

narendasan
Collaborator

…verter

Description

Fixes some of the issues in the NGC container

Fixes # (issue)

Type of change

Please delete options that are not relevant and/or add your own.

  • Bug fix (non-breaking change which fixes an issue)

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that relevant reviewers are notified

@narendasan narendasan requested a review from peri044 January 29, 2025 19:36
@github-actions github-actions bot added the component: tests, component: conversion, component: core, and component: converters labels Jan 29, 2025
@github-actions github-actions bot requested a review from zewenli98 January 29, 2025 19:36

@github-actions github-actions bot left a comment


There are some changes that do not conform to C++ style guidelines:

diff --git a/home/runner/work/TensorRT/TensorRT/core/conversion/converters/impl/batch_norm.cpp b/tmp/changes.txt
index 267b586..c8ec197 100644
--- a/home/runner/work/TensorRT/TensorRT/core/conversion/converters/impl/batch_norm.cpp
+++ b/tmp/changes.txt
@@ -134,13 +134,12 @@ auto batch_norm_registrations TORCHTRT_UNUSED =

              auto eps = static_cast<float>(args[7].unwrapToDouble(1e-5f));

-
              auto scales = at::ones(shape[1], options);
              if (!args[1].IValue()->isNone()) {
                scales = args[1].unwrapToTensor(at::ones(shape[1], options)).cpu().contiguous();
              }
              auto bias = at::zeros(shape[1], options);
-              if (!args[2].IValue()->isNone()){
+              if (!args[2].IValue()->isNone()) {
                bias = args[2].unwrapToTensor(at::zeros(shape[1], options)).cpu().contiguous();
              }
              // track_running_stats=True
@@ -170,7 +169,7 @@ auto batch_norm_registrations TORCHTRT_UNUSED =
                    so for some functionalities, users need to install correct \
                    cuDNN version by themselves. Please see our support matrix \
                    here: https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html.");
-                //return false;
+                // return false;
              }

              const int relu = 0;
ERROR: Some files do not conform to style guidelines
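
For reference, the C++ diff above reduces to two conventions from the project's C++ formatter (clang-format, judging by the diff style): a space between the closing parenthesis of a condition and the opening brace, and a space after the // that starts a line comment. The standalone sketch below is illustrative only and is not code from this PR:

// Illustrative sketch, not code from this PR: the two formatting
// conventions flagged above, applied to a minimal example.
#include <iostream>

int clamp_to_zero(int x) {
  // 1. A space separates the condition from the opening brace: ") {" rather than "){".
  if (x < 0) {
    // 2. Line comments carry a space after the slashes: "// note" rather than "//note".
    return 0;
  }
  return x;
}

int main() {
  std::cout << clamp_to_zero(-3) << " " << clamp_to_zero(7) << std::endl;
  return 0;
}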


@github-actions github-actions bot left a comment


There are some changes that do not conform to C++ style guidelines:

diff --git a/home/runner/work/TensorRT/TensorRT/core/conversion/converters/impl/batch_norm.cpp b/tmp/changes.txt
index 267b586..c8ec197 100644
--- a/home/runner/work/TensorRT/TensorRT/core/conversion/converters/impl/batch_norm.cpp
+++ b/tmp/changes.txt
@@ -134,13 +134,12 @@ auto batch_norm_registrations TORCHTRT_UNUSED =

              auto eps = static_cast<float>(args[7].unwrapToDouble(1e-5f));

-
              auto scales = at::ones(shape[1], options);
              if (!args[1].IValue()->isNone()) {
                scales = args[1].unwrapToTensor(at::ones(shape[1], options)).cpu().contiguous();
              }
              auto bias = at::zeros(shape[1], options);
-              if (!args[2].IValue()->isNone()){
+              if (!args[2].IValue()->isNone()) {
                bias = args[2].unwrapToTensor(at::zeros(shape[1], options)).cpu().contiguous();
              }
              // track_running_stats=True
@@ -170,7 +169,7 @@ auto batch_norm_registrations TORCHTRT_UNUSED =
                    so for some functionalities, users need to install correct \
                    cuDNN version by themselves. Please see our support matrix \
                    here: https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html.");
-                //return false;
+                // return false;
              }

              const int relu = 0;
ERROR: Some files do not conform to style guidelines


@github-actions github-actions bot left a comment


There are some changes that do not conform to Python style guidelines:

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/0e30a6276601af7e5fc4d5166e2e3d37/torch_compile_advanced_usage.py	2025-01-29 19:36:39.024741+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/0e30a6276601af7e5fc4d5166e2e3d37/torch_compile_advanced_usage.py	2025-01-29 19:37:06.269420+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_advanced_usage:

Torch Compile Advanced Usage
======================================================

-This interactive script is intended as an overview of the process by which `torch_tensorrt.compile(..., ir="torch_compile", ...)` works, and how it integrates with the `torch.compile` API."""
+This interactive script is intended as an overview of the process by which `torch_tensorrt.compile(..., ir="torch_compile", ...)` works, and how it integrates with the `torch.compile` API.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/2a9ac10f2667047a7f398d1593b7ca33/torch_export_gpt2.py	2025-01-29 19:36:39.024741+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/2a9ac10f2667047a7f398d1593b7ca33/torch_export_gpt2.py	2025-01-29 19:37:06.277587+00:00
@@ -2,11 +2,12 @@
.. _torch_export_gpt2:

Compiling GPT2 using the dynamo backend
==========================================================

-This script illustrates Torch-TensorRT workflow with dynamo backend on popular GPT2 model."""
+This script illustrates Torch-TensorRT workflow with dynamo backend on popular GPT2 model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/418941399c146271a7b7728ba3059960/dynamo_compile_resnet_example.py	2025-01-29 19:36:39.025741+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/418941399c146271a7b7728ba3059960/dynamo_compile_resnet_example.py	2025-01-29 19:37:06.325770+00:00
@@ -2,11 +2,12 @@
.. _dynamo_compile_resnet:

Compiling ResNet using the Torch-TensorRT Dyanmo Frontend
==========================================================

-This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model."""
+This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/3d4d74f6636d986f33167154f6553961/torch_export_cudagraphs.py	2025-01-29 19:36:39.024741+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/3d4d74f6636d986f33167154f6553961/torch_export_cudagraphs.py	2025-01-29 19:37:06.343057+00:00
@@ -2,11 +2,12 @@
.. _torch_export_cudagraphs:

Torch Export with Cudagraphs
======================================================

-This interactive script is intended as an overview of the process by which the Torch-TensorRT Cudagraphs integration can be used in the `ir="dynamo"` path. The functionality works similarly in the `torch.compile` path as well."""
+This interactive script is intended as an overview of the process by which the Torch-TensorRT Cudagraphs integration can be used in the `ir="dynamo"` path. The functionality works similarly in the `torch.compile` path as well.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/7b7004dc2ea6f839be532665e16e0426/torch_export_llama2.py	2025-01-29 19:36:39.027741+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/7b7004dc2ea6f839be532665e16e0426/torch_export_llama2.py	2025-01-29 19:37:06.370229+00:00
@@ -2,11 +2,12 @@
.. _torch_export_llama2:

Compiling Llama2 using the dynamo backend
==========================================================

-This script illustrates Torch-TensorRT workflow with dynamo backend on popular Llama2 model."""
+This script illustrates Torch-TensorRT workflow with dynamo backend on popular Llama2 model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/e1ef5a42560a98a132f56a79d0b66f79/dynamo_compile_advanced_usage.py	2025-01-29 19:36:39.029741+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/e1ef5a42560a98a132f56a79d0b66f79/dynamo_compile_advanced_usage.py	2025-01-29 19:37:06.450615+00:00
@@ -2,11 +2,12 @@
.. _dynamo_compile_advanced_usage:

Dynamo Compile Advanced Usage
======================================================

-This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API."""
+This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/dfa60e8f9850fd7761f3e7da81304d32/torch_compile_transformers_example.py	2025-01-29 19:36:39.029741+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/dfa60e8f9850fd7761f3e7da81304d32/torch_compile_transformers_example.py	2025-01-29 19:37:06.454081+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_transformer:

Compiling BERT using the `torch.compile` backend
==============================================================

-This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a BERT model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a BERT model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/d6e1bb6ec5f884994554d9d12e37a0f6/torch_compile_resnet_example.py	2025-01-29 19:36:39.029741+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/d6e1bb6ec5f884994554d9d12e37a0f6/torch_compile_resnet_example.py	2025-01-29 19:37:06.465458+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_resnet:

Compiling ResNet with dynamic shapes using the `torch.compile` backend
==========================================================

-This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/e550c5f53cc43e11aa6da8cfb79b54df/dynamo_compile_transformers_example.py	2025-01-29 19:36:39.029741+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/e550c5f53cc43e11aa6da8cfb79b54df/dynamo_compile_transformers_example.py	2025-01-29 19:37:06.500315+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_transformer:

Compiling a Transformer using torch.compile and TensorRT
==============================================================

-This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model."""
+This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/418941399c146271a7b7728ba3059960/dynamo_compile_resnet_example.py	2025-01-29 19:36:39.494740+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/418941399c146271a7b7728ba3059960/dynamo_compile_resnet_example.py	2025-01-29 19:37:06.512245+00:00
@@ -2,11 +2,12 @@
.. _dynamo_compile_resnet:

Compiling ResNet using the Torch-TensorRT Dyanmo Frontend
==========================================================

-This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model."""
+This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/e1ef5a42560a98a132f56a79d0b66f79/dynamo_compile_advanced_usage.py	2025-01-29 19:36:39.494740+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/e1ef5a42560a98a132f56a79d0b66f79/dynamo_compile_advanced_usage.py	2025-01-29 19:37:06.526084+00:00
@@ -2,11 +2,12 @@
.. _dynamo_compile_advanced_usage:

Dynamo Compile Advanced Usage
======================================================

-This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API."""
+This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/e550c5f53cc43e11aa6da8cfb79b54df/dynamo_compile_transformers_example.py	2025-01-29 19:36:39.494740+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/e550c5f53cc43e11aa6da8cfb79b54df/dynamo_compile_transformers_example.py	2025-01-29 19:37:06.546110+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_transformer:

Compiling a Transformer using torch.compile and TensorRT
==============================================================

-This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model."""
+This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_advanced_usage.py	2025-01-29 19:36:39.525740+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_advanced_usage.py	2025-01-29 19:37:06.723737+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_advanced_usage:

Torch Compile Advanced Usage
======================================================

-This interactive script is intended as an overview of the process by which `torch_tensorrt.compile(..., ir="torch_compile", ...)` works, and how it integrates with the `torch.compile` API."""
+This interactive script is intended as an overview of the process by which `torch_tensorrt.compile(..., ir="torch_compile", ...)` works, and how it integrates with the `torch.compile` API.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_resnet_example.py	2025-01-29 19:36:39.525740+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_resnet_example.py	2025-01-29 19:37:06.741589+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_resnet:

Compiling ResNet with dynamic shapes using the `torch.compile` backend
==========================================================

-This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_transformers_example.py	2025-01-29 19:36:39.525740+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_transformers_example.py	2025-01-29 19:37:06.751327+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_transformer:

Compiling BERT using the `torch.compile` backend
==============================================================

-This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a BERT model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a BERT model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_cudagraphs.py	2025-01-29 19:36:39.525740+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_cudagraphs.py	2025-01-29 19:37:06.771073+00:00
@@ -2,11 +2,12 @@
.. _torch_export_cudagraphs:

Torch Export with Cudagraphs
======================================================

-This interactive script is intended as an overview of the process by which the Torch-TensorRT Cudagraphs integration can be used in the `ir="dynamo"` path. The functionality works similarly in the `torch.compile` path as well."""
+This interactive script is intended as an overview of the process by which the Torch-TensorRT Cudagraphs integration can be used in the `ir="dynamo"` path. The functionality works similarly in the `torch.compile` path as well.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_gpt2.py	2025-01-29 19:36:39.525740+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_gpt2.py	2025-01-29 19:37:06.771929+00:00
@@ -2,11 +2,12 @@
.. _torch_export_gpt2:

Compiling GPT2 using the dynamo backend
==========================================================

-This script illustrates Torch-TensorRT workflow with dynamo backend on popular GPT2 model."""
+This script illustrates Torch-TensorRT workflow with dynamo backend on popular GPT2 model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_llama2.py	2025-01-29 19:36:39.525740+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_llama2.py	2025-01-29 19:37:06.785828+00:00
@@ -2,11 +2,12 @@
.. _torch_export_llama2:

Compiling Llama2 using the dynamo backend
==========================================================

-This script illustrates Torch-TensorRT workflow with dynamo backend on popular Llama2 model."""
+This script illustrates Torch-TensorRT workflow with dynamo backend on popular Llama2 model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/_Input.py	2025-01-29 19:36:39.534740+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/_Input.py	2025-01-29 19:37:07.312686+00:00
@@ -259,11 +259,11 @@
        else:
            return False

    @staticmethod
    def _parse_tensor_domain(
-        domain: Optional[Tuple[float, float]]
+        domain: Optional[Tuple[float, float]],
    ) -> Tuple[float, float]:
        """
        Produce a tuple of integers which specifies a tensor domain in the interval format: [lo, hi)

        Args:
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/_TRTBuilderMonitor.py	2025-01-29 19:36:39.536740+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/_TRTBuilderMonitor.py	2025-01-29 19:37:07.568443+00:00
@@ -51,17 +51,17 @@

    def _redraw(self, *, blank_lines: int = 0) -> None:
        if self._render:

            def clear_line() -> None:
-                print("\x1B[2K", end="")
+                print("\x1b[2K", end="")

            def move_to_start_of_line() -> None:
-                print("\x1B[0G", end="")
+                print("\x1b[0G", end="")

            def move_cursor_up(lines: int) -> None:
-                print("\x1B[{}A".format(lines), end="")
+                print("\x1b[{}A".format(lines), end="")

            def progress_bar(steps: int, num_steps: int) -> str:
                INNER_WIDTH = 10
                completed_bar_chars = int(INNER_WIDTH * steps / float(num_steps))
                return "[{}{}]".format(
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/_enums.py	2025-01-29 19:36:39.534740+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/_enums.py	2025-01-29 19:37:07.817569+00:00
@@ -1198,11 +1198,11 @@
            "Provided unsupported source type for EngineCapability conversion"
        )

    @classmethod
    def try_from(
-        c: Union[trt.EngineCapability, EngineCapability]
+        c: Union[trt.EngineCapability, EngineCapability],
    ) -> Optional[EngineCapability]:
        """Create a Torch-TensorRT engine capability enum from a TensorRT engine capability enum.

        Takes a device type enum from tensorrt and create a ``torch_tensorrt.EngineCapability``.
        If the source is not supported or the engine capability level is not supported in Torch-TensorRT,
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/impl/activation/ops.py	2025-01-29 19:36:39.537740+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/impl/activation/ops.py	2025-01-29 19:37:08.001864+00:00
@@ -245,11 +245,11 @@
    beta: float,
) -> TRTTensor:
    operation_type = trt.ActivationType.HARD_SIGMOID

    def hard_sigmoid_dyn_range_fn(
-        dyn_range: Tuple[float, float]
+        dyn_range: Tuple[float, float],
    ) -> Tuple[float, float]:
        def hard_sigmoid_fn(x: float) -> float:
            return max(0, min(1, alpha * x + beta))

        return hard_sigmoid_fn(dyn_range[0]), hard_sigmoid_fn(dyn_range[1])
@@ -308,11 +308,11 @@
    alpha: float,
) -> TRTTensor:
    operation_type = trt.ActivationType.THRESHOLDED_RELU

    def thresholded_relu_dyn_range_fn(
-        dyn_range: Tuple[float, float]
+        dyn_range: Tuple[float, float],
    ) -> Tuple[float, float]:
        def thresholded_relu_fn(x: float) -> float:
            return x if x > alpha else 0

        return thresholded_relu_fn(dyn_range[0]), thresholded_relu_fn(dyn_range[1])
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/utils.py	2025-01-29 19:36:39.541740+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/utils.py	2025-01-29 19:37:09.732455+00:00
@@ -463,11 +463,11 @@
    else:
        return torch.device(device)


def to_torch_tensorrt_device(
-    device: Optional[Union[Device, torch.device, str]]
+    device: Optional[Union[Device, torch.device, str]],
) -> Device:
    """Cast a device-type to torch_tensorrt.Device

    Returns the corresponding torch_tensorrt.Device
    """
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/fx/test/converters/acc_op/test_where.py	2025-01-29 19:36:39.546740+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/fx/test/converters/acc_op/test_where.py	2025-01-29 19:37:10.459191+00:00
@@ -99,11 +99,11 @@
                self.y = torch.ones(y_shape)

            def forward(self, condition):
                return torch.where(condition, self.x, self.y)

-        inputs = [(torch.randn(condition_shape) > 0)]
+        inputs = [torch.randn(condition_shape) > 0]
        self.run_test(
            Where(x_shape, y_shape),
            inputs,
            expected_ops={acc_ops.where},
            test_implicit_batch_dim=False,
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/fx/tracer/acc_tracer/acc_tracer.py	2025-01-29 19:36:39.550740+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/fx/tracer/acc_tracer/acc_tracer.py	2025-01-29 19:37:11.750889+00:00
@@ -515,11 +515,11 @@
    dim0 = cast(int, transpose_node.args[1])
    dim1 = cast(int, transpose_node.args[2])
    changed = False

    def _calculate_dim(
-        transpose_dim: Union[torch.fx.Node, int]
+        transpose_dim: Union[torch.fx.Node, int],
    ) -> Union[torch.fx.Node, int]:
        nonlocal transpose_input_node
        nonlocal changed
        if isinstance(transpose_dim, torch.fx.Node):
            # Transpose dim is sub node
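
The Python diffs in this report are mechanical normalizations from the project's Python formatter (likely black, an assumption based on the diff style): the closing quotes of a multi-line docstring move to their own line, a lone parameter wrapped onto its own line gains a trailing comma, hex escapes are lowercased, and redundant parentheses around a simple expression are dropped. The standalone sketch below shows each rule in its formatted form; it is illustrative only, with hypothetical names, and is not code from this PR:

"""
Illustrative sketch, not code from this PR: after formatting, the closing
quotes of a multi-line docstring sit on their own line.
"""

from typing import Optional, Tuple


def parse_domain(
    domain: Optional[Tuple[float, float]],  # trailing comma kept on a lone wrapped parameter
) -> Tuple[float, float]:
    """Return the given domain, or a default interval when none is given."""
    if domain is None:
        return (0.0, 1.0)
    return domain


def clear_line() -> None:
    # Hex escapes are written in lowercase: "\x1b" rather than "\x1B".
    print("\x1b[2K", end="")


# Redundant parentheses around a simple expression are dropped.
flags = [3 > 2]

print(parse_domain(None), flags)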


@github-actions github-actions bot left a comment


There are some changes that do not conform to Python style guidelines:

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/0e30a6276601af7e5fc4d5166e2e3d37/torch_compile_advanced_usage.py	2025-01-29 19:36:39.564405+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/0e30a6276601af7e5fc4d5166e2e3d37/torch_compile_advanced_usage.py	2025-01-29 19:37:10.655493+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_advanced_usage:

Torch Compile Advanced Usage
======================================================

-This interactive script is intended as an overview of the process by which `torch_tensorrt.compile(..., ir="torch_compile", ...)` works, and how it integrates with the `torch.compile` API."""
+This interactive script is intended as an overview of the process by which `torch_tensorrt.compile(..., ir="torch_compile", ...)` works, and how it integrates with the `torch.compile` API.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/2a9ac10f2667047a7f398d1593b7ca33/torch_export_gpt2.py	2025-01-29 19:36:39.564405+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/2a9ac10f2667047a7f398d1593b7ca33/torch_export_gpt2.py	2025-01-29 19:37:10.668049+00:00
@@ -2,11 +2,12 @@
.. _torch_export_gpt2:

Compiling GPT2 using the dynamo backend
==========================================================

-This script illustrates Torch-TensorRT workflow with dynamo backend on popular GPT2 model."""
+This script illustrates Torch-TensorRT workflow with dynamo backend on popular GPT2 model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/418941399c146271a7b7728ba3059960/dynamo_compile_resnet_example.py	2025-01-29 19:36:39.565405+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/418941399c146271a7b7728ba3059960/dynamo_compile_resnet_example.py	2025-01-29 19:37:10.712228+00:00
@@ -2,11 +2,12 @@
.. _dynamo_compile_resnet:

Compiling ResNet using the Torch-TensorRT Dyanmo Frontend
==========================================================

-This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model."""
+This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/3d4d74f6636d986f33167154f6553961/torch_export_cudagraphs.py	2025-01-29 19:36:39.565405+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/3d4d74f6636d986f33167154f6553961/torch_export_cudagraphs.py	2025-01-29 19:37:10.722646+00:00
@@ -2,11 +2,12 @@
.. _torch_export_cudagraphs:

Torch Export with Cudagraphs
======================================================

-This interactive script is intended as an overview of the process by which the Torch-TensorRT Cudagraphs integration can be used in the `ir="dynamo"` path. The functionality works similarly in the `torch.compile` path as well."""
+This interactive script is intended as an overview of the process by which the Torch-TensorRT Cudagraphs integration can be used in the `ir="dynamo"` path. The functionality works similarly in the `torch.compile` path as well.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/7b7004dc2ea6f839be532665e16e0426/torch_export_llama2.py	2025-01-29 19:36:39.569405+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/7b7004dc2ea6f839be532665e16e0426/torch_export_llama2.py	2025-01-29 19:37:10.765635+00:00
@@ -2,11 +2,12 @@
.. _torch_export_llama2:

Compiling Llama2 using the dynamo backend
==========================================================

-This script illustrates Torch-TensorRT workflow with dynamo backend on popular Llama2 model."""
+This script illustrates Torch-TensorRT workflow with dynamo backend on popular Llama2 model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/e1ef5a42560a98a132f56a79d0b66f79/dynamo_compile_advanced_usage.py	2025-01-29 19:36:39.572405+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/e1ef5a42560a98a132f56a79d0b66f79/dynamo_compile_advanced_usage.py	2025-01-29 19:37:10.833730+00:00
@@ -2,11 +2,12 @@
.. _dynamo_compile_advanced_usage:

Dynamo Compile Advanced Usage
======================================================

-This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API."""
+This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/dfa60e8f9850fd7761f3e7da81304d32/torch_compile_transformers_example.py	2025-01-29 19:36:39.572405+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/dfa60e8f9850fd7761f3e7da81304d32/torch_compile_transformers_example.py	2025-01-29 19:37:10.836616+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_transformer:

Compiling BERT using the `torch.compile` backend
==============================================================

-This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a BERT model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a BERT model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/d6e1bb6ec5f884994554d9d12e37a0f6/torch_compile_resnet_example.py	2025-01-29 19:36:39.572405+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/d6e1bb6ec5f884994554d9d12e37a0f6/torch_compile_resnet_example.py	2025-01-29 19:37:10.844011+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_resnet:

Compiling ResNet with dynamic shapes using the `torch.compile` backend
==========================================================

-This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/e550c5f53cc43e11aa6da8cfb79b54df/dynamo_compile_transformers_example.py	2025-01-29 19:36:39.572405+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/e550c5f53cc43e11aa6da8cfb79b54df/dynamo_compile_transformers_example.py	2025-01-29 19:37:10.881087+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_transformer:

Compiling a Transformer using torch.compile and TensorRT
==============================================================

-This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model."""
+This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/418941399c146271a7b7728ba3059960/dynamo_compile_resnet_example.py	2025-01-29 19:36:40.034409+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/418941399c146271a7b7728ba3059960/dynamo_compile_resnet_example.py	2025-01-29 19:37:10.895013+00:00
@@ -2,11 +2,12 @@
.. _dynamo_compile_resnet:

Compiling ResNet using the Torch-TensorRT Dyanmo Frontend
==========================================================

-This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model."""
+This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/e1ef5a42560a98a132f56a79d0b66f79/dynamo_compile_advanced_usage.py	2025-01-29 19:36:40.035409+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/e1ef5a42560a98a132f56a79d0b66f79/dynamo_compile_advanced_usage.py	2025-01-29 19:37:10.908179+00:00
@@ -2,11 +2,12 @@
.. _dynamo_compile_advanced_usage:

Dynamo Compile Advanced Usage
======================================================

-This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API."""
+This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/e550c5f53cc43e11aa6da8cfb79b54df/dynamo_compile_transformers_example.py	2025-01-29 19:36:40.035409+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/e550c5f53cc43e11aa6da8cfb79b54df/dynamo_compile_transformers_example.py	2025-01-29 19:37:10.927471+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_transformer:

Compiling a Transformer using torch.compile and TensorRT
==============================================================

-This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model."""
+This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_advanced_usage.py	2025-01-29 19:36:40.065409+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_advanced_usage.py	2025-01-29 19:37:11.073668+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_advanced_usage:

Torch Compile Advanced Usage
======================================================

-This interactive script is intended as an overview of the process by which `torch_tensorrt.compile(..., ir="torch_compile", ...)` works, and how it integrates with the `torch.compile` API."""
+This interactive script is intended as an overview of the process by which `torch_tensorrt.compile(..., ir="torch_compile", ...)` works, and how it integrates with the `torch.compile` API.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_resnet_example.py	2025-01-29 19:36:40.065409+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_resnet_example.py	2025-01-29 19:37:11.136215+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_resnet:

Compiling ResNet with dynamic shapes using the `torch.compile` backend
==========================================================

-This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_transformers_example.py	2025-01-29 19:36:40.066409+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_transformers_example.py	2025-01-29 19:37:11.144777+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_transformer:

Compiling BERT using the `torch.compile` backend
==============================================================

-This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a BERT model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a BERT model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_cudagraphs.py	2025-01-29 19:36:40.066409+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_cudagraphs.py	2025-01-29 19:37:11.152833+00:00
@@ -2,11 +2,12 @@
.. _torch_export_cudagraphs:

Torch Export with Cudagraphs
======================================================

-This interactive script is intended as an overview of the process by which the Torch-TensorRT Cudagraphs integration can be used in the `ir="dynamo"` path. The functionality works similarly in the `torch.compile` path as well."""
+This interactive script is intended as an overview of the process by which the Torch-TensorRT Cudagraphs integration can be used in the `ir="dynamo"` path. The functionality works similarly in the `torch.compile` path as well.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_gpt2.py	2025-01-29 19:36:40.066409+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_gpt2.py	2025-01-29 19:37:11.158252+00:00
@@ -2,11 +2,12 @@
.. _torch_export_gpt2:

Compiling GPT2 using the dynamo backend
==========================================================

-This script illustrates Torch-TensorRT workflow with dynamo backend on popular GPT2 model."""
+This script illustrates Torch-TensorRT workflow with dynamo backend on popular GPT2 model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_llama2.py	2025-01-29 19:36:40.066409+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_llama2.py	2025-01-29 19:37:11.186821+00:00
@@ -2,11 +2,12 @@
.. _torch_export_llama2:

Compiling Llama2 using the dynamo backend
==========================================================

-This script illustrates Torch-TensorRT workflow with dynamo backend on popular Llama2 model."""
+This script illustrates Torch-TensorRT workflow with dynamo backend on popular Llama2 model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/_Input.py	2025-01-29 19:36:40.075409+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/_Input.py	2025-01-29 19:37:11.692124+00:00
@@ -259,11 +259,11 @@
        else:
            return False

    @staticmethod
    def _parse_tensor_domain(
-        domain: Optional[Tuple[float, float]]
+        domain: Optional[Tuple[float, float]],
    ) -> Tuple[float, float]:
        """
        Produce a tuple of integers which specifies a tensor domain in the interval format: [lo, hi)

        Args:
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/_TRTBuilderMonitor.py	2025-01-29 19:36:40.077410+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/_TRTBuilderMonitor.py	2025-01-29 19:37:11.960189+00:00
@@ -51,17 +51,17 @@

    def _redraw(self, *, blank_lines: int = 0) -> None:
        if self._render:

            def clear_line() -> None:
-                print("\x1B[2K", end="")
+                print("\x1b[2K", end="")

            def move_to_start_of_line() -> None:
-                print("\x1B[0G", end="")
+                print("\x1b[0G", end="")

            def move_cursor_up(lines: int) -> None:
-                print("\x1B[{}A".format(lines), end="")
+                print("\x1b[{}A".format(lines), end="")

            def progress_bar(steps: int, num_steps: int) -> str:
                INNER_WIDTH = 10
                completed_bar_chars = int(INNER_WIDTH * steps / float(num_steps))
                return "[{}{}]".format(
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/_enums.py	2025-01-29 19:36:40.075409+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/_enums.py	2025-01-29 19:37:12.174794+00:00
@@ -1198,11 +1198,11 @@
            "Provided unsupported source type for EngineCapability conversion"
        )

    @classmethod
    def try_from(
-        c: Union[trt.EngineCapability, EngineCapability]
+        c: Union[trt.EngineCapability, EngineCapability],
    ) -> Optional[EngineCapability]:
        """Create a Torch-TensorRT engine capability enum from a TensorRT engine capability enum.

        Takes a device type enum from tensorrt and create a ``torch_tensorrt.EngineCapability``.
        If the source is not supported or the engine capability level is not supported in Torch-TensorRT,
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/impl/activation/ops.py	2025-01-29 19:36:40.078409+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/impl/activation/ops.py	2025-01-29 19:37:12.385590+00:00
@@ -245,11 +245,11 @@
    beta: float,
) -> TRTTensor:
    operation_type = trt.ActivationType.HARD_SIGMOID

    def hard_sigmoid_dyn_range_fn(
-        dyn_range: Tuple[float, float]
+        dyn_range: Tuple[float, float],
    ) -> Tuple[float, float]:
        def hard_sigmoid_fn(x: float) -> float:
            return max(0, min(1, alpha * x + beta))

        return hard_sigmoid_fn(dyn_range[0]), hard_sigmoid_fn(dyn_range[1])
@@ -308,11 +308,11 @@
    alpha: float,
) -> TRTTensor:
    operation_type = trt.ActivationType.THRESHOLDED_RELU

    def thresholded_relu_dyn_range_fn(
-        dyn_range: Tuple[float, float]
+        dyn_range: Tuple[float, float],
    ) -> Tuple[float, float]:
        def thresholded_relu_fn(x: float) -> float:
            return x if x > alpha else 0

        return thresholded_relu_fn(dyn_range[0]), thresholded_relu_fn(dyn_range[1])
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/utils.py	2025-01-29 19:36:40.082410+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/utils.py	2025-01-29 19:37:13.840024+00:00
@@ -463,11 +463,11 @@
    else:
        return torch.device(device)


def to_torch_tensorrt_device(
-    device: Optional[Union[Device, torch.device, str]]
+    device: Optional[Union[Device, torch.device, str]],
) -> Device:
    """Cast a device-type to torch_tensorrt.Device

    Returns the corresponding torch_tensorrt.Device
    """
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/fx/test/converters/acc_op/test_where.py	2025-01-29 19:36:40.087409+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/fx/test/converters/acc_op/test_where.py	2025-01-29 19:37:14.801177+00:00
@@ -99,11 +99,11 @@
                self.y = torch.ones(y_shape)

            def forward(self, condition):
                return torch.where(condition, self.x, self.y)

-        inputs = [(torch.randn(condition_shape) > 0)]
+        inputs = [torch.randn(condition_shape) > 0]
        self.run_test(
            Where(x_shape, y_shape),
            inputs,
            expected_ops={acc_ops.where},
            test_implicit_batch_dim=False,
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/fx/tracer/acc_tracer/acc_tracer.py	2025-01-29 19:36:40.091410+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/fx/tracer/acc_tracer/acc_tracer.py	2025-01-29 19:37:16.009485+00:00
@@ -515,11 +515,11 @@
    dim0 = cast(int, transpose_node.args[1])
    dim1 = cast(int, transpose_node.args[2])
    changed = False

    def _calculate_dim(
-        transpose_dim: Union[torch.fx.Node, int]
+        transpose_dim: Union[torch.fx.Node, int],
    ) -> Union[torch.fx.Node, int]:
        nonlocal transpose_input_node
        nonlocal changed
        if isinstance(transpose_dim, torch.fx.Node):
            # Transpose dim is sub node

Collaborator

@peri044 peri044 left a comment


LGTM

@narendasan narendasan mentioned this pull request Jan 29, 2025
Contributor

@zewenli98 zewenli98 left a comment


Needs linting. Others LGTM


@github-actions github-actions bot left a comment


There are some changes that do not conform to C++ style guidelines:

diff --git a/home/runner/work/TensorRT/TensorRT/core/conversion/converters/impl/batch_norm.cpp b/tmp/changes.txt
index 267b586..c8ec197 100644
--- a/home/runner/work/TensorRT/TensorRT/core/conversion/converters/impl/batch_norm.cpp
+++ b/tmp/changes.txt
@@ -134,13 +134,12 @@ auto batch_norm_registrations TORCHTRT_UNUSED =

              auto eps = static_cast<float>(args[7].unwrapToDouble(1e-5f));

-
              auto scales = at::ones(shape[1], options);
              if (!args[1].IValue()->isNone()) {
                scales = args[1].unwrapToTensor(at::ones(shape[1], options)).cpu().contiguous();
              }
              auto bias = at::zeros(shape[1], options);
-              if (!args[2].IValue()->isNone()){
+              if (!args[2].IValue()->isNone()) {
                bias = args[2].unwrapToTensor(at::zeros(shape[1], options)).cpu().contiguous();
              }
              // track_running_stats=True
@@ -170,7 +169,7 @@ auto batch_norm_registrations TORCHTRT_UNUSED =
                    so for some functionalities, users need to install correct \
                    cuDNN version by themselves. Please see our support matrix \
                    here: https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html.");
-                //return false;
+                // return false;
              }

              const int relu = 0;
ERROR: Some files do not conform to style guidelines


@github-actions github-actions bot left a comment


There are some changes that do not conform to Python style guidelines:

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/0e30a6276601af7e5fc4d5166e2e3d37/torch_compile_advanced_usage.py	2025-01-29 22:43:28.748747+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/0e30a6276601af7e5fc4d5166e2e3d37/torch_compile_advanced_usage.py	2025-01-29 22:43:53.434876+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_advanced_usage:

Torch Compile Advanced Usage
======================================================

-This interactive script is intended as an overview of the process by which `torch_tensorrt.compile(..., ir="torch_compile", ...)` works, and how it integrates with the `torch.compile` API."""
+This interactive script is intended as an overview of the process by which `torch_tensorrt.compile(..., ir="torch_compile", ...)` works, and how it integrates with the `torch.compile` API.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/2a9ac10f2667047a7f398d1593b7ca33/torch_export_gpt2.py	2025-01-29 22:43:28.748747+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/2a9ac10f2667047a7f398d1593b7ca33/torch_export_gpt2.py	2025-01-29 22:43:53.451237+00:00
@@ -2,11 +2,12 @@
.. _torch_export_gpt2:

Compiling GPT2 using the dynamo backend
==========================================================

-This script illustrates Torch-TensorRT workflow with dynamo backend on popular GPT2 model."""
+This script illustrates Torch-TensorRT workflow with dynamo backend on popular GPT2 model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/418941399c146271a7b7728ba3059960/dynamo_compile_resnet_example.py	2025-01-29 22:43:28.749747+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/418941399c146271a7b7728ba3059960/dynamo_compile_resnet_example.py	2025-01-29 22:43:53.489996+00:00
@@ -2,11 +2,12 @@
.. _dynamo_compile_resnet:

Compiling ResNet using the Torch-TensorRT Dyanmo Frontend
==========================================================

-This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model."""
+This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/3d4d74f6636d986f33167154f6553961/torch_export_cudagraphs.py	2025-01-29 22:43:28.749747+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/3d4d74f6636d986f33167154f6553961/torch_export_cudagraphs.py	2025-01-29 22:43:53.503968+00:00
@@ -2,11 +2,12 @@
.. _torch_export_cudagraphs:

Torch Export with Cudagraphs
======================================================

-This interactive script is intended as an overview of the process by which the Torch-TensorRT Cudagraphs integration can be used in the `ir="dynamo"` path. The functionality works similarly in the `torch.compile` path as well."""
+This interactive script is intended as an overview of the process by which the Torch-TensorRT Cudagraphs integration can be used in the `ir="dynamo"` path. The functionality works similarly in the `torch.compile` path as well.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/7b7004dc2ea6f839be532665e16e0426/torch_export_llama2.py	2025-01-29 22:43:28.751747+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/7b7004dc2ea6f839be532665e16e0426/torch_export_llama2.py	2025-01-29 22:43:53.537264+00:00
@@ -2,11 +2,12 @@
.. _torch_export_llama2:

Compiling Llama2 using the dynamo backend
==========================================================

-This script illustrates Torch-TensorRT workflow with dynamo backend on popular Llama2 model."""
+This script illustrates Torch-TensorRT workflow with dynamo backend on popular Llama2 model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/e1ef5a42560a98a132f56a79d0b66f79/dynamo_compile_advanced_usage.py	2025-01-29 22:43:28.754747+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/e1ef5a42560a98a132f56a79d0b66f79/dynamo_compile_advanced_usage.py	2025-01-29 22:43:53.616256+00:00
@@ -2,11 +2,12 @@
.. _dynamo_compile_advanced_usage:

Dynamo Compile Advanced Usage
======================================================

-This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API."""
+This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/d6e1bb6ec5f884994554d9d12e37a0f6/torch_compile_resnet_example.py	2025-01-29 22:43:28.753747+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/d6e1bb6ec5f884994554d9d12e37a0f6/torch_compile_resnet_example.py	2025-01-29 22:43:53.619172+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_resnet:

Compiling ResNet with dynamic shapes using the `torch.compile` backend
==========================================================

-This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/dfa60e8f9850fd7761f3e7da81304d32/torch_compile_transformers_example.py	2025-01-29 22:43:28.754747+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/dfa60e8f9850fd7761f3e7da81304d32/torch_compile_transformers_example.py	2025-01-29 22:43:53.621111+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_transformer:

Compiling BERT using the `torch.compile` backend
==============================================================

-This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a BERT model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a BERT model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/e550c5f53cc43e11aa6da8cfb79b54df/dynamo_compile_transformers_example.py	2025-01-29 22:43:28.754747+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/e550c5f53cc43e11aa6da8cfb79b54df/dynamo_compile_transformers_example.py	2025-01-29 22:43:53.661691+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_transformer:

Compiling a Transformer using torch.compile and TensorRT
==============================================================

-This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model."""
+This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/418941399c146271a7b7728ba3059960/dynamo_compile_resnet_example.py	2025-01-29 22:43:29.208753+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/418941399c146271a7b7728ba3059960/dynamo_compile_resnet_example.py	2025-01-29 22:43:53.668668+00:00
@@ -2,11 +2,12 @@
.. _dynamo_compile_resnet:

Compiling ResNet using the Torch-TensorRT Dyanmo Frontend
==========================================================

-This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model."""
+This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/e1ef5a42560a98a132f56a79d0b66f79/dynamo_compile_advanced_usage.py	2025-01-29 22:43:29.208753+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/e1ef5a42560a98a132f56a79d0b66f79/dynamo_compile_advanced_usage.py	2025-01-29 22:43:53.685452+00:00
@@ -2,11 +2,12 @@
.. _dynamo_compile_advanced_usage:

Dynamo Compile Advanced Usage
======================================================

-This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API."""
+This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/e550c5f53cc43e11aa6da8cfb79b54df/dynamo_compile_transformers_example.py	2025-01-29 22:43:29.208753+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/e550c5f53cc43e11aa6da8cfb79b54df/dynamo_compile_transformers_example.py	2025-01-29 22:43:53.705429+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_transformer:

Compiling a Transformer using torch.compile and TensorRT
==============================================================

-This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model."""
+This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_advanced_usage.py	2025-01-29 22:43:29.238754+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_advanced_usage.py	2025-01-29 22:43:53.847072+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_advanced_usage:

Torch Compile Advanced Usage
======================================================

-This interactive script is intended as an overview of the process by which `torch_tensorrt.compile(..., ir="torch_compile", ...)` works, and how it integrates with the `torch.compile` API."""
+This interactive script is intended as an overview of the process by which `torch_tensorrt.compile(..., ir="torch_compile", ...)` works, and how it integrates with the `torch.compile` API.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_resnet_example.py	2025-01-29 22:43:29.238754+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_resnet_example.py	2025-01-29 22:43:53.895708+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_resnet:

Compiling ResNet with dynamic shapes using the `torch.compile` backend
==========================================================

-This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_transformers_example.py	2025-01-29 22:43:29.238754+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_transformers_example.py	2025-01-29 22:43:53.910790+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_transformer:

Compiling BERT using the `torch.compile` backend
==============================================================

-This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a BERT model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a BERT model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_gpt2.py	2025-01-29 22:43:29.238754+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_gpt2.py	2025-01-29 22:43:53.913547+00:00
@@ -2,11 +2,12 @@
.. _torch_export_gpt2:

Compiling GPT2 using the dynamo backend
==========================================================

-This script illustrates Torch-TensorRT workflow with dynamo backend on popular GPT2 model."""
+This script illustrates Torch-TensorRT workflow with dynamo backend on popular GPT2 model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_cudagraphs.py	2025-01-29 22:43:29.238754+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_cudagraphs.py	2025-01-29 22:43:53.927156+00:00
@@ -2,11 +2,12 @@
.. _torch_export_cudagraphs:

Torch Export with Cudagraphs
======================================================

-This interactive script is intended as an overview of the process by which the Torch-TensorRT Cudagraphs integration can be used in the `ir="dynamo"` path. The functionality works similarly in the `torch.compile` path as well."""
+This interactive script is intended as an overview of the process by which the Torch-TensorRT Cudagraphs integration can be used in the `ir="dynamo"` path. The functionality works similarly in the `torch.compile` path as well.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_llama2.py	2025-01-29 22:43:29.238754+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_llama2.py	2025-01-29 22:43:53.942999+00:00
@@ -2,11 +2,12 @@
.. _torch_export_llama2:

Compiling Llama2 using the dynamo backend
==========================================================

-This script illustrates Torch-TensorRT workflow with dynamo backend on popular Llama2 model."""
+This script illustrates Torch-TensorRT workflow with dynamo backend on popular Llama2 model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/_Input.py	2025-01-29 22:43:29.247754+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/_Input.py	2025-01-29 22:43:54.376459+00:00
@@ -259,11 +259,11 @@
        else:
            return False

    @staticmethod
    def _parse_tensor_domain(
-        domain: Optional[Tuple[float, float]]
+        domain: Optional[Tuple[float, float]],
    ) -> Tuple[float, float]:
        """
        Produce a tuple of integers which specifies a tensor domain in the interval format: [lo, hi)

        Args:
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/_TRTBuilderMonitor.py	2025-01-29 22:43:29.249754+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/_TRTBuilderMonitor.py	2025-01-29 22:43:54.662593+00:00
@@ -51,17 +51,17 @@

    def _redraw(self, *, blank_lines: int = 0) -> None:
        if self._render:

            def clear_line() -> None:
-                print("\x1B[2K", end="")
+                print("\x1b[2K", end="")

            def move_to_start_of_line() -> None:
-                print("\x1B[0G", end="")
+                print("\x1b[0G", end="")

            def move_cursor_up(lines: int) -> None:
-                print("\x1B[{}A".format(lines), end="")
+                print("\x1b[{}A".format(lines), end="")

            def progress_bar(steps: int, num_steps: int) -> str:
                INNER_WIDTH = 10
                completed_bar_chars = int(INNER_WIDTH * steps / float(num_steps))
                return "[{}{}]".format(
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/_enums.py	2025-01-29 22:43:29.248754+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/_enums.py	2025-01-29 22:43:54.859980+00:00
@@ -1198,11 +1198,11 @@
            "Provided unsupported source type for EngineCapability conversion"
        )

    @classmethod
    def try_from(
-        c: Union[trt.EngineCapability, EngineCapability]
+        c: Union[trt.EngineCapability, EngineCapability],
    ) -> Optional[EngineCapability]:
        """Create a Torch-TensorRT engine capability enum from a TensorRT engine capability enum.

        Takes a device type enum from tensorrt and create a ``torch_tensorrt.EngineCapability``.
        If the source is not supported or the engine capability level is not supported in Torch-TensorRT,
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/impl/activation/ops.py	2025-01-29 22:43:29.250754+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/impl/activation/ops.py	2025-01-29 22:43:55.075638+00:00
@@ -245,11 +245,11 @@
    beta: float,
) -> TRTTensor:
    operation_type = trt.ActivationType.HARD_SIGMOID

    def hard_sigmoid_dyn_range_fn(
-        dyn_range: Tuple[float, float]
+        dyn_range: Tuple[float, float],
    ) -> Tuple[float, float]:
        def hard_sigmoid_fn(x: float) -> float:
            return max(0, min(1, alpha * x + beta))

        return hard_sigmoid_fn(dyn_range[0]), hard_sigmoid_fn(dyn_range[1])
@@ -308,11 +308,11 @@
    alpha: float,
) -> TRTTensor:
    operation_type = trt.ActivationType.THRESHOLDED_RELU

    def thresholded_relu_dyn_range_fn(
-        dyn_range: Tuple[float, float]
+        dyn_range: Tuple[float, float],
    ) -> Tuple[float, float]:
        def thresholded_relu_fn(x: float) -> float:
            return x if x > alpha else 0

        return thresholded_relu_fn(dyn_range[0]), thresholded_relu_fn(dyn_range[1])
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/utils.py	2025-01-29 22:43:29.255754+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/utils.py	2025-01-29 22:43:56.503115+00:00
@@ -463,11 +463,11 @@
    else:
        return torch.device(device)


def to_torch_tensorrt_device(
-    device: Optional[Union[Device, torch.device, str]]
+    device: Optional[Union[Device, torch.device, str]],
) -> Device:
    """Cast a device-type to torch_tensorrt.Device

    Returns the corresponding torch_tensorrt.Device
    """
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/fx/test/converters/acc_op/test_where.py	2025-01-29 22:43:29.260754+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/fx/test/converters/acc_op/test_where.py	2025-01-29 22:43:57.436752+00:00
@@ -99,11 +99,11 @@
                self.y = torch.ones(y_shape)

            def forward(self, condition):
                return torch.where(condition, self.x, self.y)

-        inputs = [(torch.randn(condition_shape) > 0)]
+        inputs = [torch.randn(condition_shape) > 0]
        self.run_test(
            Where(x_shape, y_shape),
            inputs,
            expected_ops={acc_ops.where},
            test_implicit_batch_dim=False,
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/fx/tracer/acc_tracer/acc_tracer.py	2025-01-29 22:43:29.263754+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/fx/tracer/acc_tracer/acc_tracer.py	2025-01-29 22:43:58.457952+00:00
@@ -515,11 +515,11 @@
    dim0 = cast(int, transpose_node.args[1])
    dim1 = cast(int, transpose_node.args[2])
    changed = False

    def _calculate_dim(
-        transpose_dim: Union[torch.fx.Node, int]
+        transpose_dim: Union[torch.fx.Node, int],
    ) -> Union[torch.fx.Node, int]:
        nonlocal transpose_input_node
        nonlocal changed
        if isinstance(transpose_dim, torch.fx.Node):
            # Transpose dim is sub node
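
The two Python fixes repeated throughout the diff above are mechanical: the closing `"""` of a multi-line module docstring is moved onto its own line instead of being appended to the last sentence, and a trailing comma is added when the sole parameter of a signature is wrapped onto its own line. A minimal sketch of the expected style, using made-up module text and a hypothetical `to_torch_device` helper (neither is taken from the repository):

"""
.. _example_script:

Example Script
==============

A one-line summary of what the script demonstrates.
"""  # closing quotes on their own line, not appended to the summary sentence

from typing import Optional, Union

import torch


def to_torch_device(
    device: Optional[Union[torch.device, str]],  # trailing comma after the wrapped parameter
) -> torch.device:
    """Normalize a device argument to ``torch.device``, defaulting to CPU."""
    if device is None:
        return torch.device("cpu")
    return torch.device(device)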

@github-actions github-actions bot left a comment

There are some changes that do not conform to C++ style guidelines:

diff --git a/home/runner/work/TensorRT/TensorRT/core/conversion/converters/impl/batch_norm.cpp b/tmp/changes.txt
index 3fe9ec0..c8ec197 100644
--- a/home/runner/work/TensorRT/TensorRT/core/conversion/converters/impl/batch_norm.cpp
+++ b/tmp/changes.txt
@@ -134,7 +134,6 @@ auto batch_norm_registrations TORCHTRT_UNUSED =

              auto eps = static_cast<float>(args[7].unwrapToDouble(1e-5f));

-
              auto scales = at::ones(shape[1], options);
              if (!args[1].IValue()->isNone()) {
                scales = args[1].unwrapToTensor(at::ones(shape[1], options)).cpu().contiguous();
ERROR: Some files do not conform to style guidelines

@github-actions github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/0e30a6276601af7e5fc4d5166e2e3d37/torch_compile_advanced_usage.py	2025-01-29 23:05:16.424495+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/0e30a6276601af7e5fc4d5166e2e3d37/torch_compile_advanced_usage.py	2025-01-29 23:05:42.059025+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_advanced_usage:

Torch Compile Advanced Usage
======================================================

-This interactive script is intended as an overview of the process by which `torch_tensorrt.compile(..., ir="torch_compile", ...)` works, and how it integrates with the `torch.compile` API."""
+This interactive script is intended as an overview of the process by which `torch_tensorrt.compile(..., ir="torch_compile", ...)` works, and how it integrates with the `torch.compile` API.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/2a9ac10f2667047a7f398d1593b7ca33/torch_export_gpt2.py	2025-01-29 23:05:16.424495+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/2a9ac10f2667047a7f398d1593b7ca33/torch_export_gpt2.py	2025-01-29 23:05:42.079457+00:00
@@ -2,11 +2,12 @@
.. _torch_export_gpt2:

Compiling GPT2 using the dynamo backend
==========================================================

-This script illustrates Torch-TensorRT workflow with dynamo backend on popular GPT2 model."""
+This script illustrates Torch-TensorRT workflow with dynamo backend on popular GPT2 model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/418941399c146271a7b7728ba3059960/dynamo_compile_resnet_example.py	2025-01-29 23:05:16.425495+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/418941399c146271a7b7728ba3059960/dynamo_compile_resnet_example.py	2025-01-29 23:05:42.117351+00:00
@@ -2,11 +2,12 @@
.. _dynamo_compile_resnet:

Compiling ResNet using the Torch-TensorRT Dyanmo Frontend
==========================================================

-This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model."""
+This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/3d4d74f6636d986f33167154f6553961/torch_export_cudagraphs.py	2025-01-29 23:05:16.425495+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/3d4d74f6636d986f33167154f6553961/torch_export_cudagraphs.py	2025-01-29 23:05:42.132395+00:00
@@ -2,11 +2,12 @@
.. _torch_export_cudagraphs:

Torch Export with Cudagraphs
======================================================

-This interactive script is intended as an overview of the process by which the Torch-TensorRT Cudagraphs integration can be used in the `ir="dynamo"` path. The functionality works similarly in the `torch.compile` path as well."""
+This interactive script is intended as an overview of the process by which the Torch-TensorRT Cudagraphs integration can be used in the `ir="dynamo"` path. The functionality works similarly in the `torch.compile` path as well.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/7b7004dc2ea6f839be532665e16e0426/torch_export_llama2.py	2025-01-29 23:05:16.427495+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/7b7004dc2ea6f839be532665e16e0426/torch_export_llama2.py	2025-01-29 23:05:42.161593+00:00
@@ -2,11 +2,12 @@
.. _torch_export_llama2:

Compiling Llama2 using the dynamo backend
==========================================================

-This script illustrates Torch-TensorRT workflow with dynamo backend on popular Llama2 model."""
+This script illustrates Torch-TensorRT workflow with dynamo backend on popular Llama2 model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/e1ef5a42560a98a132f56a79d0b66f79/dynamo_compile_advanced_usage.py	2025-01-29 23:05:16.429495+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/e1ef5a42560a98a132f56a79d0b66f79/dynamo_compile_advanced_usage.py	2025-01-29 23:05:42.241972+00:00
@@ -2,11 +2,12 @@
.. _dynamo_compile_advanced_usage:

Dynamo Compile Advanced Usage
======================================================

-This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API."""
+This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/dfa60e8f9850fd7761f3e7da81304d32/torch_compile_transformers_example.py	2025-01-29 23:05:16.429495+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/dfa60e8f9850fd7761f3e7da81304d32/torch_compile_transformers_example.py	2025-01-29 23:05:42.242309+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_transformer:

Compiling BERT using the `torch.compile` backend
==============================================================

-This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a BERT model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a BERT model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/d6e1bb6ec5f884994554d9d12e37a0f6/torch_compile_resnet_example.py	2025-01-29 23:05:16.429495+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/d6e1bb6ec5f884994554d9d12e37a0f6/torch_compile_resnet_example.py	2025-01-29 23:05:42.255066+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_resnet:

Compiling ResNet with dynamic shapes using the `torch.compile` backend
==========================================================

-This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/e550c5f53cc43e11aa6da8cfb79b54df/dynamo_compile_transformers_example.py	2025-01-29 23:05:16.429495+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/e550c5f53cc43e11aa6da8cfb79b54df/dynamo_compile_transformers_example.py	2025-01-29 23:05:42.292971+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_transformer:

Compiling a Transformer using torch.compile and TensorRT
==============================================================

-This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model."""
+This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/418941399c146271a7b7728ba3059960/dynamo_compile_resnet_example.py	2025-01-29 23:05:16.876503+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/418941399c146271a7b7728ba3059960/dynamo_compile_resnet_example.py	2025-01-29 23:05:42.297565+00:00
@@ -2,11 +2,12 @@
.. _dynamo_compile_resnet:

Compiling ResNet using the Torch-TensorRT Dyanmo Frontend
==========================================================

-This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model."""
+This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/e1ef5a42560a98a132f56a79d0b66f79/dynamo_compile_advanced_usage.py	2025-01-29 23:05:16.876503+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/e1ef5a42560a98a132f56a79d0b66f79/dynamo_compile_advanced_usage.py	2025-01-29 23:05:42.314008+00:00
@@ -2,11 +2,12 @@
.. _dynamo_compile_advanced_usage:

Dynamo Compile Advanced Usage
======================================================

-This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API."""
+This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/e550c5f53cc43e11aa6da8cfb79b54df/dynamo_compile_transformers_example.py	2025-01-29 23:05:16.876503+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/e550c5f53cc43e11aa6da8cfb79b54df/dynamo_compile_transformers_example.py	2025-01-29 23:05:42.339552+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_transformer:

Compiling a Transformer using torch.compile and TensorRT
==============================================================

-This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model."""
+This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_advanced_usage.py	2025-01-29 23:05:16.906504+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_advanced_usage.py	2025-01-29 23:05:42.469803+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_advanced_usage:

Torch Compile Advanced Usage
======================================================

-This interactive script is intended as an overview of the process by which `torch_tensorrt.compile(..., ir="torch_compile", ...)` works, and how it integrates with the `torch.compile` API."""
+This interactive script is intended as an overview of the process by which `torch_tensorrt.compile(..., ir="torch_compile", ...)` works, and how it integrates with the `torch.compile` API.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_transformers_example.py	2025-01-29 23:05:16.906504+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_transformers_example.py	2025-01-29 23:05:42.530092+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_transformer:

Compiling BERT using the `torch.compile` backend
==============================================================

-This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a BERT model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a BERT model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_resnet_example.py	2025-01-29 23:05:16.906504+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_resnet_example.py	2025-01-29 23:05:42.535406+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_resnet:

Compiling ResNet with dynamic shapes using the `torch.compile` backend
==========================================================

-This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_gpt2.py	2025-01-29 23:05:16.906504+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_gpt2.py	2025-01-29 23:05:42.537265+00:00
@@ -2,11 +2,12 @@
.. _torch_export_gpt2:

Compiling GPT2 using the dynamo backend
==========================================================

-This script illustrates Torch-TensorRT workflow with dynamo backend on popular GPT2 model."""
+This script illustrates Torch-TensorRT workflow with dynamo backend on popular GPT2 model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_cudagraphs.py	2025-01-29 23:05:16.906504+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_cudagraphs.py	2025-01-29 23:05:42.547969+00:00
@@ -2,11 +2,12 @@
.. _torch_export_cudagraphs:

Torch Export with Cudagraphs
======================================================

-This interactive script is intended as an overview of the process by which the Torch-TensorRT Cudagraphs integration can be used in the `ir="dynamo"` path. The functionality works similarly in the `torch.compile` path as well."""
+This interactive script is intended as an overview of the process by which the Torch-TensorRT Cudagraphs integration can be used in the `ir="dynamo"` path. The functionality works similarly in the `torch.compile` path as well.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_llama2.py	2025-01-29 23:05:16.906504+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_llama2.py	2025-01-29 23:05:42.574980+00:00
@@ -2,11 +2,12 @@
.. _torch_export_llama2:

Compiling Llama2 using the dynamo backend
==========================================================

-This script illustrates Torch-TensorRT workflow with dynamo backend on popular Llama2 model."""
+This script illustrates Torch-TensorRT workflow with dynamo backend on popular Llama2 model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/_Input.py	2025-01-29 23:05:16.915504+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/_Input.py	2025-01-29 23:05:43.049355+00:00
@@ -259,11 +259,11 @@
        else:
            return False

    @staticmethod
    def _parse_tensor_domain(
-        domain: Optional[Tuple[float, float]]
+        domain: Optional[Tuple[float, float]],
    ) -> Tuple[float, float]:
        """
        Produce a tuple of integers which specifies a tensor domain in the interval format: [lo, hi)

        Args:
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/_TRTBuilderMonitor.py	2025-01-29 23:05:16.917504+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/_TRTBuilderMonitor.py	2025-01-29 23:05:43.292123+00:00
@@ -51,17 +51,17 @@

    def _redraw(self, *, blank_lines: int = 0) -> None:
        if self._render:

            def clear_line() -> None:
-                print("\x1B[2K", end="")
+                print("\x1b[2K", end="")

            def move_to_start_of_line() -> None:
-                print("\x1B[0G", end="")
+                print("\x1b[0G", end="")

            def move_cursor_up(lines: int) -> None:
-                print("\x1B[{}A".format(lines), end="")
+                print("\x1b[{}A".format(lines), end="")

            def progress_bar(steps: int, num_steps: int) -> str:
                INNER_WIDTH = 10
                completed_bar_chars = int(INNER_WIDTH * steps / float(num_steps))
                return "[{}{}]".format(
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/_enums.py	2025-01-29 23:05:16.915504+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/_enums.py	2025-01-29 23:05:43.540905+00:00
@@ -1198,11 +1198,11 @@
            "Provided unsupported source type for EngineCapability conversion"
        )

    @classmethod
    def try_from(
-        c: Union[trt.EngineCapability, EngineCapability]
+        c: Union[trt.EngineCapability, EngineCapability],
    ) -> Optional[EngineCapability]:
        """Create a Torch-TensorRT engine capability enum from a TensorRT engine capability enum.

        Takes a device type enum from tensorrt and create a ``torch_tensorrt.EngineCapability``.
        If the source is not supported or the engine capability level is not supported in Torch-TensorRT,
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/impl/activation/ops.py	2025-01-29 23:05:16.918504+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/impl/activation/ops.py	2025-01-29 23:05:43.688805+00:00
@@ -245,11 +245,11 @@
    beta: float,
) -> TRTTensor:
    operation_type = trt.ActivationType.HARD_SIGMOID

    def hard_sigmoid_dyn_range_fn(
-        dyn_range: Tuple[float, float]
+        dyn_range: Tuple[float, float],
    ) -> Tuple[float, float]:
        def hard_sigmoid_fn(x: float) -> float:
            return max(0, min(1, alpha * x + beta))

        return hard_sigmoid_fn(dyn_range[0]), hard_sigmoid_fn(dyn_range[1])
@@ -308,11 +308,11 @@
    alpha: float,
) -> TRTTensor:
    operation_type = trt.ActivationType.THRESHOLDED_RELU

    def thresholded_relu_dyn_range_fn(
-        dyn_range: Tuple[float, float]
+        dyn_range: Tuple[float, float],
    ) -> Tuple[float, float]:
        def thresholded_relu_fn(x: float) -> float:
            return x if x > alpha else 0

        return thresholded_relu_fn(dyn_range[0]), thresholded_relu_fn(dyn_range[1])
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/utils.py	2025-01-29 23:05:16.922504+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/utils.py	2025-01-29 23:05:45.150564+00:00
@@ -463,11 +463,11 @@
    else:
        return torch.device(device)


def to_torch_tensorrt_device(
-    device: Optional[Union[Device, torch.device, str]]
+    device: Optional[Union[Device, torch.device, str]],
) -> Device:
    """Cast a device-type to torch_tensorrt.Device

    Returns the corresponding torch_tensorrt.Device
    """
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/fx/test/converters/acc_op/test_where.py	2025-01-29 23:05:16.927504+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/fx/test/converters/acc_op/test_where.py	2025-01-29 23:05:46.079855+00:00
@@ -99,11 +99,11 @@
                self.y = torch.ones(y_shape)

            def forward(self, condition):
                return torch.where(condition, self.x, self.y)

-        inputs = [(torch.randn(condition_shape) > 0)]
+        inputs = [torch.randn(condition_shape) > 0]
        self.run_test(
            Where(x_shape, y_shape),
            inputs,
            expected_ops={acc_ops.where},
            test_implicit_batch_dim=False,
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/fx/tracer/acc_tracer/acc_tracer.py	2025-01-29 23:05:16.931504+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/fx/tracer/acc_tracer/acc_tracer.py	2025-01-29 23:05:47.364080+00:00
@@ -515,11 +515,11 @@
    dim0 = cast(int, transpose_node.args[1])
    dim1 = cast(int, transpose_node.args[2])
    changed = False

    def _calculate_dim(
-        transpose_dim: Union[torch.fx.Node, int]
+        transpose_dim: Union[torch.fx.Node, int],
    ) -> Union[torch.fx.Node, int]:
        nonlocal transpose_input_node
        nonlocal changed
        if isinstance(transpose_dim, torch.fx.Node):
            # Transpose dim is sub node
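
The remaining Python changes in this run are also behavior-preserving: hex digits inside string escape sequences are lowercased (`"\x1B[2K"` and `"\x1b[2K"` produce the same string), and redundant parentheses around an expression that already stands alone as a list element are dropped. A short sketch under those assumptions (the constant name below is illustrative only):

import torch

# Both spellings produce the identical string; only the lowercase form
# matches the expected style.
CLEAR_LINE = "\x1b[2K"
assert CLEAR_LINE == "\x1B[2K"

# The surrounding parentheses add nothing; the list element is the same
# boolean tensor either way.
condition_shape = (3, 4)
inputs = [torch.randn(condition_shape) > 0]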

@github-actions github-actions bot added documentation Improvements or additions to documentation component: build system Issues re: Build system component: api [Python] Issues re: Python API component: runtime component: fx component: dynamo Issues relating to the `torch.compile` or `torch._dynamo.export` paths labels Jan 29, 2025
@github-actions github-actions bot left a comment

There are some changes that do not conform to C++ style guidelines:

diff --git a/home/runner/work/TensorRT/TensorRT/core/conversion/converters/impl/batch_norm.cpp b/tmp/changes.txt
index 3fe9ec0..c8ec197 100644
--- a/home/runner/work/TensorRT/TensorRT/core/conversion/converters/impl/batch_norm.cpp
+++ b/tmp/changes.txt
@@ -134,7 +134,6 @@ auto batch_norm_registrations TORCHTRT_UNUSED =

              auto eps = static_cast<float>(args[7].unwrapToDouble(1e-5f));

-
              auto scales = at::ones(shape[1], options);
              if (!args[1].IValue()->isNone()) {
                scales = args[1].unwrapToTensor(at::ones(shape[1], options)).cpu().contiguous();
ERROR: Some files do not conform to style guidelines

Two follow-up comments from github-actions[bot] were marked as spam and are hidden.

@peri044 peri044 merged commit f2a38f5 into main Jan 30, 2025
54 of 68 checks passed
peri044 added a commit that referenced this pull request Jan 30, 2025
#3367)

Co-authored-by: Dheeraj Peri <peri.dheeraj@gmail.com>
peri044 added a commit that referenced this pull request Jan 30, 2025
#3367)

Co-authored-by: Dheeraj Peri <peri.dheeraj@gmail.com>
apbose pushed a commit that referenced this pull request Feb 13, 2025
#3367)

Co-authored-by: Dheeraj Peri <peri.dheeraj@gmail.com>
Labels
cla signed component: api [Python] Issues re: Python API component: build system Issues re: Build system component: conversion Issues re: Conversion stage component: converters Issues re: Specific op converters component: core Issues re: The core compiler component: dynamo Issues relating to the `torch.compile` or `torch._dynamo.export` paths component: fx component: runtime component: tests Issues re: Tests documentation Improvements or additions to documentation
4 participants