Nx: 0.9.2
Elixir: 1.18.1-otp-27
Erlang: 27.2
macOS 14.6.1
Nx default binary backend
Hi:

I'm having some issues with single vs. double precision for floating point math in Nx. Also note that this might not be an actual issue, but might be some misunderstanding or even user error on my part! Here's an example which illustrates the problem in iex:
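As a minimal sketch of the kind of comparison involved (not the original session), assume result_f32 and result_f64 hold the constant 0.7 at single and double precision respectively; the exact expressions and tolerances below are illustrative assumptions:

```elixir
# Illustrative stand-ins for result_f32 / result_f64: the constant 0.7
# stored at single vs. double precision.
result_f32 = Nx.tensor(0.7, type: :f32)
result_f64 = Nx.f64(0.7)

# Double-precision reference value.
expected = Nx.f64(0.7)

# Agrees to a double-precision tolerance (Nx.all_close/3 returns a u8 tensor equal to 1).
Nx.all_close(result_f64, expected, rtol: 0.0, atol: 1.0e-15)

# Does not agree at that tolerance: 0.7 rounded to f32 is off by roughly 1.2e-8.
Nx.all_close(result_f32, expected, rtol: 0.0, atol: 1.0e-15)

# Agrees once the tolerance is loosened to single-precision scale.
Nx.all_close(result_f32, expected, rtol: 0.0, atol: 1.0e-6)
```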
Note that Nx.all_close/3 confirms that result_f64 is accurate to double precision, whereas result_f32 is accurate only to single precision.

Here's a Slack thread in the #machine-learning channel of the EEF Slack where I asked this as a question prior to filing this bug (assuming that it was user error on my part - which it may still be!). There's also an Elixir script in that thread which shows the issue. In the meantime, I can just make sure I use quantities like Nx.f64(0.7) in my code (see the sketch below), although it would be nice to use the simpler 0.7 at some point in the future.

I know there are possibly some related GitHub issues, such as #448.
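For reference, a minimal sketch of the Nx.f64(0.7) workaround mentioned above, assuming a computation along the lines of scaling an f64 tensor by a constant (the tensor and operation are assumptions, not the original code):

```elixir
x = Nx.iota({3}, type: :f64)

# Workaround: pass the constant as an explicit double-precision tensor.
Nx.multiply(x, Nx.f64(0.7))

# The simpler spelling referred to above; whether it preserves
# double precision is the question being reported.
Nx.multiply(x, 0.7)
```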
Thank you very much for your help,
Greg