
Conversation

@naoyam (Collaborator) commented on Nov 17, 2022

Represents a 32-bit floating-point scalar value. This type is not supported in PyTorch, so it can't be used as an input to fusions.

Intended to be used in some low-level optimizations, such as Welford vectorization.

Since the code for Float and Double would be mostly identical, I created the FloatingPoint template class and defined Float and Double as:

using Float = FloatingPoint<DataType::Float>;
using Double = FloatingPoint<DataType::Double>;
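
For illustration, a minimal sketch of how such a shared template might look. The NativeType mapping, constructor signatures, and member names here are assumptions made for the example, not the actual implementation (in nvFuser the scalar classes also derive from the IR's Val base class, which is omitted here):

#include <optional>

enum class DataType { Float, Double };

// Hypothetical mapping from a DataType tag to the corresponding C++ type.
template <DataType DT> struct NativeType;
template <> struct NativeType<DataType::Float> { using type = float; };
template <> struct NativeType<DataType::Double> { using type = double; };

// Shared implementation for floating-point scalar nodes. One template body
// serves both precisions; only the DataType tag differs.
template <DataType DT>
class FloatingPoint {
 public:
  using ScalarType = typename NativeType<DT>::type;

  // A symbolic scalar with no known value.
  FloatingPoint() = default;

  // A scalar holding a known constant value.
  explicit FloatingPoint(ScalarType value) : maybe_value_(value) {}

  bool isConst() const { return maybe_value_.has_value(); }
  std::optional<ScalarType> value() const { return maybe_value_; }

 private:
  std::optional<ScalarType> maybe_value_{};
};

using Float = FloatingPoint<DataType::Float>;
using Double = FloatingPoint<DataType::Double>;

Defining both types as aliases of one template means any fix or feature lands in Float and Double simultaneously; the two can only differ in the DataType tag and its native C++ type.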

@naoyam marked this pull request as ready for review on November 17, 2022 at 21:15
@naoyam merged commit 3a6197e into devel on Nov 18, 2022
@naoyam deleted the fp_scalar_type branch on November 18, 2022 at 02:10