
LSTM #315

Closed
renxida opened this issue Jan 5, 2024 · 14 comments

renxida (Contributor) commented Jan 5, 2024

No description provided.

renxida (Contributor, Author) commented Jan 8, 2024

Looks like there isn't a corresponding torch lstm op. Looking into how to implement it.

renxida self-assigned this Jan 8, 2024
vivekkhandelwal1 (Contributor) commented

Hi @renxida, there's a corresponding LSTM op in PyTorch, see: https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html. You can add OnnxToTorch and TorchToLinalg lowerings for the LSTM op. This would be a good learning experience as well.
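For anyone picking this up, it may help to keep the ONNX-side tensor shapes in mind while writing the OnnxToTorch lowering. A minimal sketch of the shape bookkeeping, based on the ONNX LSTM operator spec (the function name and structure here are hypothetical, not part of either codebase):

```python
# Hypothetical helper: the tensor shapes involved in an onnx.LSTM op,
# per the ONNX operator spec (num_directions is 2 for "bidirectional").
# Note the gate packing differs: ONNX orders gates (i, o, f, c), while
# torch.nn.LSTM packs its weights as (i, f, g, o), so a lowering must
# also permute the weight slices, not just reshape them.
def onnx_lstm_shapes(seq_len, batch, input_size, hidden_size, num_directions=1):
    return {
        "X": (seq_len, batch, input_size),                   # input sequence
        "W": (num_directions, 4 * hidden_size, input_size),  # input weights
        "R": (num_directions, 4 * hidden_size, hidden_size), # recurrent weights
        "B": (num_directions, 8 * hidden_size),              # Wb ++ Rb biases
        "Y": (seq_len, num_directions, batch, hidden_size),  # all hidden states
        "Y_h": (num_directions, batch, hidden_size),         # final hidden state
        "Y_c": (num_directions, batch, hidden_size),         # final cell state
    }
```

Plugging in seq_len=32, batch=32, input_size=192, hidden_size=48 with two directions reproduces the bidirectional tensor types that show up later in this thread.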

stellaraccident (Contributor) commented

You'll likely want to decompose it versus converting to linalg directly. There is rarely value to a compiler to keep lstm as a named op. It is a trivial fusion.
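To illustrate why this is a trivial fusion: one LSTM step is just four affine transforms (two matmuls in the vector case) followed by pointwise sigmoid/tanh and multiplies, which is exactly the kind of pattern a compiler fuses anyway. A minimal scalar sketch, with hypothetical names and one feature per gate:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical scalar LSTM cell: w, u, b each hold the
# (input, forget, cell, output) gate parameters as plain floats.
def lstm_cell(x, h, c, w, u, b):
    i = sigmoid(w[0] * x + u[0] * h + b[0])    # input gate
    f = sigmoid(w[1] * x + u[1] * h + b[1])    # forget gate
    g = math.tanh(w[2] * x + u[2] * h + b[2])  # candidate cell state
    o = sigmoid(w[3] * x + u[3] * h + b[3])    # output gate
    c_new = f * c + i * g                      # pointwise update
    h_new = o * math.tanh(c_new)               # pointwise output
    return h_new, c_new
```

Everything after the four affine expressions is elementwise, so decomposing to these primitives lets the existing matmul-plus-pointwise fusion handle it with no LSTM-specific named op.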

vivekkhandelwal1 (Contributor) commented

> You'll likely want to decompose it versus converting to linalg directly. There is rarely value to a compiler to keep lstm as a named op. It is a trivial fusion.

Yeah, that's a better approach. This might be of some help: https://github.com/pytorch/pytorch/blob/a97d00cca5c1f47e74048f110df5706669a84e6e/torch/_decomp/decompositions.py#L3135

vivekkhandelwal1 (Contributor) commented

Btw @stellaraccident, one drawback of importing ONNX models this way is that we can't make use of the existing PyTorch decompositions.

stellaraccident (Contributor) commented

Yeah, I know. There's no other way to do it, though: ONNX is a C++-only technology.

vivekkhandelwal1 (Contributor) commented

Hi @renxida, are you still working on this op?

renxida (Contributor, Author) commented Feb 12, 2024

@vivekkhandelwal1 nope currently not. Feel free to grab it.

Right now I'm adding ONNX end-to-end test cases in @kumardeepakamd's SHARK-TestSuite repo.

renxida removed their assignment Feb 12, 2024
kumardeepakamd commented

@vivekkhandelwal1 and @renxida, has this landed, and can it be marked completed? Have e2eshark ONNX and/or PyTorch tests and torch-mlir lit and torch op tests been added?

renxida (Contributor, Author) commented Apr 1, 2024

This will be fixed by llvm/torch-mlir#2969

renxida mentioned this issue Jun 4, 2024 (closed)
AmosLewis (Contributor) commented

```shell
python ./run.py --tolerance 0.001 0.001 --cachedir /proj/gdba/shark/cache -f onnx -g models --mode onnx --report --tests onnx/models/sequencer2d_m_vaiq
```

```
sequencer2d_m_vaiq.default.onnx.torch.mlir:426:14: error: failed to legalize operation 'torch.operator' that was explicitly marked illegal
    %422:3 = torch.operator "onnx.LSTM"(%403, %181, %182, %180, %none, %412, %421) {torch.onnx.direction = "bidirectional", torch.onnx.hidden_size = 48 : si64} : (!torch.vtensor<[32,32,192],f32>, !torch.vtensor<[2,192,192],f32>, !torch.vtensor<[2,192,48],f32>, !torch.vtensor<[2,384],f32>, !torch.none, !torch.vtensor<[2,32,48],f32>, !torch.vtensor<[2,32,48],f32>) -> (!torch.vtensor<[32,2,32,48],f32>, !torch.vtensor<[2,32,48],f32>, !torch.vtensor<[2,32,48],f32>)
             ^
sequencer2d_m_vaiq.default.onnx.torch.mlir:426:14: note: see current operation: %940:3 = "torch.operator"(%837, %363, %365, %361, %792, %888, %939) <{name = "onnx.LSTM"}> {torch.onnx.direction = "bidirectional", torch.onnx.hidden_size = 48 : si64} : (!torch.vtensor<[32,32,192],f32>, !torch.vtensor<[2,192,192],f32>, !torch.vtensor<[2,192,48],f32>, !torch.vtensor<[2,384],f32>, !torch.none, !torch.vtensor<[2,32,48],f32>, !torch.vtensor<[2,32,48],f32>) -> (!torch.vtensor<[32,2,32,48],f32>, !torch.vtensor<[2,32,48],f32>, !torch.vtensor<[2,32,48],f32>)
```

@renxida, this needs a relook.

renxida (Contributor, Author) commented Jul 29, 2024

Aw crap, looks like I have to support bidirectional ONNX LSTM for this model to work.
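For reference, the bidirectional case needs no new machinery beyond the unidirectional one: it is one scan over the input plus a second scan over the reversed input with its own set of weights, with the two hidden states concatenated per time step. A minimal scalar sketch under that framing (all names hypothetical):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical scalar LSTM cell; w, u, b hold the
# (input, forget, cell, output) gate parameters as plain floats.
def lstm_cell(x, h, c, w, u, b):
    i = sigmoid(w[0] * x + u[0] * h + b[0])
    f = sigmoid(w[1] * x + u[1] * h + b[1])
    g = math.tanh(w[2] * x + u[2] * h + b[2])
    o = sigmoid(w[3] * x + u[3] * h + b[3])
    c = f * c + i * g
    return o * math.tanh(c), c

def run_direction(xs, params):
    # One unidirectional scan, collecting the hidden state at every step.
    h, c, hs = 0.0, 0.0, []
    for x in xs:
        h, c = lstm_cell(x, h, c, *params)
        hs.append(h)
    return hs

def bidirectional_lstm(xs, fwd_params, bwd_params):
    # Forward pass, plus a pass over the reversed input (then re-reversed
    # so index t lines up), paired per time step as the "concat" output.
    fwd = run_direction(xs, fwd_params)
    bwd = list(reversed(run_direction(list(reversed(xs)), bwd_params)))
    return list(zip(fwd, bwd))
```

The lowering-side work is then mostly slicing the direction axis out of W, R, and B, reversing the sequence for the backward scan, and stitching the per-direction outputs back into the shape ONNX expects.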

PhaneeshB (Contributor) commented Oct 7, 2024

Bidirectional support: llvm/torch-mlir#3771

Merged! @pdhirajkumarprasad

pdhirajkumarprasad commented

We are not seeing this issue in models, so closing this issue.

7 participants