
✨[Converter] Implement aten::layer_norm, aten::_softmax #1798


Closed
apbose opened this issue Mar 31, 2023 · 1 comment

Comments

apbose (Collaborator) commented Mar 31, 2023

Torch ops:
layer_norm(Tensor input, SymInt[] normalized_shape, Tensor? weight=None, Tensor? bias=None, float eps=1e-05, bool cudnn_enable=True) -> Tensor
softmax.int(Tensor self, int dim, ScalarType? dtype=None) -> Tensor

ATen ops:
torch.ops.aten.layer_norm.default
torch.ops.aten._softmax.default

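For reference, the numerical semantics a converter for these two ops must reproduce can be sketched in NumPy. This is an illustrative reference implementation of what aten::layer_norm and aten::_softmax compute, not the Torch-TensorRT converter code itself; the function names here are just for the sketch.

```python
import numpy as np

def layer_norm(x, normalized_shape, weight=None, bias=None, eps=1e-5):
    # aten::layer_norm normalizes over the trailing `normalized_shape`
    # dimensions using the population variance (ddof=0).
    axes = tuple(range(x.ndim - len(normalized_shape), x.ndim))
    mean = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)
    y = (x - mean) / np.sqrt(var + eps)
    if weight is not None:   # optional elementwise affine scale
        y = y * weight
    if bias is not None:     # optional elementwise affine shift
        y = y + bias
    return y

def softmax(x, dim):
    # aten::_softmax: subtract the per-`dim` max before exponentiating
    # for numerical stability, then normalize to sum to 1 along `dim`.
    z = x - x.max(axis=dim, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=dim, keepdims=True)
```

In TensorRT terms, layer_norm is typically lowered to a chain of reduce/elementwise layers (mean, variance, rsqrt, scale, shift), while _softmax maps to the native softmax layer with the axis set from `dim`.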
@apbose apbose added the feature request New feature or request label Mar 31, 2023
@apbose apbose self-assigned this Mar 31, 2023
@apbose apbose changed the title ✨[Converter] Implement aten::layer_norm ✨[Converter] Implement aten::layer_norm, aten::_softmax Mar 31, 2023
github-actions commented

This issue has not seen activity for 90 days. Remove the stale label or comment, or it will be closed in 10 days.


5 participants