
🐛 [Bug] Issue with remove_ops lowering pass in FX/Dynamo #2036

Closed
@gs-olive

Description

Bug Description

When compiling the model below with is_aten=True in either the FX or Dynamo path, the following error is encountered.

Model

import torch

class MyModel(torch.nn.Module):
    def forward(self, x):
        # permute() makes x non-contiguous; contiguous() restores a dense layout
        x = x.permute(0, 2, 1, 3).contiguous()
        # flatten the last two dimensions into one
        new_shape = x.size()[:-2] + (-1,)
        return x.view(new_shape)

Error:

  File "~/TensorRT/py/torch_tensorrt/fx/lower.py", line 270, in <lambda>
    trace_func=lambda module, inputs: aten_tracer.opt_trace(
  File "~/TensorRT/py/torch_tensorrt/fx/utils.py", line 150, in function_wrapper
    return f(*args, **kwargs)
  File "~/TensorRT/py/torch_tensorrt/fx/tracer/dispatch_tracer/aten_tracer.py", line 161, in opt_trace
    fx_module(*args)
  File "/usr/local/lib/python3.8/dist-packages/torch/fx/graph_module.py", line 662, in call_wrapped
    return self._wrapped_call(self, *args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/fx/graph_module.py", line 281, in __call__
    raise e
  File "/usr/local/lib/python3.8/dist-packages/torch/fx/graph_module.py", line 271, in __call__
    return super(self.cls, obj).__call__(*args, **kwargs)  # type: ignore[misc]
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1511, in _call_impl
    return forward_call(*args, **kwargs)
  File "<eval_with_key>.14", line 9, in forward
  File "/usr/local/lib/python3.8/dist-packages/torch/_ops.py", line 413, in __call__
    return self._op(*args, **kwargs or {})
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
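
As a minimal illustration, independent of Torch-TensorRT: once the contiguous() call is gone from the graph, view cannot express the flattened shape over the permuted strides, while reshape (as the error message suggests) copies when needed:

import torch

x = torch.rand((2, 3, 4, 5))
y = x.permute(0, 2, 1, 3)  # shape (2, 4, 3, 5), non-contiguous strides (60, 5, 20, 1)

new_shape = y.size()[:-2] + (-1,)  # (2, 4, -1)

try:
    y.view(new_shape)  # raises: the last two dims span two contiguous subspaces
except RuntimeError as e:
    print(e)

print(y.reshape(new_shape).shape)  # torch.Size([2, 4, 15]); reshape copies as needed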

To Reproduce

Steps to reproduce the behavior:
Run the following script:

import torch
import torch_tensorrt

class MyModel(torch.nn.Module):
    def forward(self, x):
        x = x.permute(0, 2, 1, 3).contiguous()
        new_shape = x.size()[:-2] + (-1,)
        return x.view(new_shape)


model = MyModel().eval().cuda()
input_ = torch.rand((2, 3, 4, 5)).cuda()

# Fails
fx_compiled = torch_tensorrt.fx.compile(model,
                                        [input_],
                                        is_aten=True)

# Also Fails
dynamo_compiled = torch_tensorrt.dynamo.fx_ts_compat.compile(model,
                                                             [input_],
                                                             is_aten=True,
                                                             enabled_precisions={torch.float})

Expected behavior

The model should compile in both paths.

Environment

Build information about Torch-TensorRT can be found by turning on debug messages

  • Torch-TensorRT Version (e.g. 1.0.0): 82631fa
  • PyTorch Version (e.g. 1.0): torch==2.1.0.dev20230608+cu118
  • CUDA version: 11.8

Additional context

For more context, see #1708 (comment). That PR implemented an attempted fix targeting remove_ops, the lowering pass that inserts the invalid view op.
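
As a sketch of one possible mitigation (not necessarily the approach taken in #1708; the pass below is hypothetical and assumes the invalid views appear as aten.view.default call_function nodes in the traced graph), the view nodes could be rewritten to reshape, which the error message itself recommends:

import torch
from torch.fx import GraphModule

def view_to_reshape(gm: GraphModule) -> GraphModule:
    # Hypothetical lowering pass: replace aten.view with aten.reshape,
    # which tolerates non-contiguous inputs by copying when necessary.
    for node in gm.graph.nodes:
        if node.op == "call_function" and node.target == torch.ops.aten.view.default:
            node.target = torch.ops.aten.reshape.default
    gm.graph.lint()
    gm.recompile()
    return gm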

Labels

bug, component: dynamo, component: fx
