Add Float IR node class #2197

Merged
merged 2 commits into from
Nov 18, 2022
Conversation

@naoyam (Collaborator) commented Nov 17, 2022

Represents a 32-bit floating-point scalar value. It is not supported in PyTorch, so it can't be used as an input to fusions.

Intended to be used in low-level optimizations such as Welford vectorization.

Since the code for Float and Double would be mostly identical, I created a FloatingPoint template class and defined Float and Double as:

using Float = FloatingPoint<DataType::Float>;
using Double = FloatingPoint<DataType::Double>;

@naoyam naoyam marked this pull request as ready for review November 17, 2022 21:15
Files with resolved review comments:
- torch/csrc/jit/codegen/cuda/codegen.cpp
- torch/csrc/jit/codegen/cuda/ir_base_nodes.cpp
- torch/csrc/jit/codegen/cuda/ir_cloner.cpp
- torch/csrc/jit/codegen/cuda/ir_interface_nodes.h
@naoyam naoyam merged commit 3a6197e into devel Nov 18, 2022
@naoyam naoyam deleted the fp_scalar_type branch November 18, 2022 02:10