
Single vs. double precision issue #1567

Open
woodward opened this issue Jan 1, 2025 · 0 comments

Nx: 0.9.2
Elixir: 1.18.1-otp-27
Erlang: 27.2
macOS 14.6.1
Nx default binary backend

Hi:
I'm having some issues with single vs. double precision for floating point math in Nx. Also note that this might not be an actual issue, but might be some misunderstanding or even user error on my part! Here's an example which illustrates the problem in iex:

iex -S mix
Erlang/OTP 27 [erts-15.2] [source] [64-bit] [smp:16:16] [ds:16:16:10] [async-threads:1]
Interactive Elixir (1.18.1) - press Ctrl+C to exit (type h() ENTER for help)

iex> x = Nx.f64(2)
  #Nx.Tensor<
    f64
    2.0
  >

iex> result_f32 = Nx.multiply(x, 0.7)
  #Nx.Tensor<
    f64
    1.399999976158142
  >

iex> result_f64 = Nx.multiply(x, Nx.f64(0.7))
  #Nx.Tensor<
    f64
    1.4
  >

iex> Nx.all_close(result_f64, Nx.f64(1.4), atol: 1.0e-14, rtol: 1.0e-14)
  #Nx.Tensor<
    u8
    1
  >

iex> Nx.all_close(result_f32, Nx.f64(1.4), atol: 1.0e-14, rtol: 1.0e-14)
  #Nx.Tensor<
    u8
    0
  >

iex> Nx.all_close(result_f32, Nx.f64(1.4), atol: 1.0e-07, rtol: 1.0e-07)
  #Nx.Tensor<
    u8
    1
  >
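As a side note (my own arithmetic, not from the Nx docs): the `rtol: 1.0e-07` that finally passes sits right at single-precision machine epsilon (2⁻²³ ≈ 1.19e-7), while `1.0e-14` is only reachable with double precision (epsilon 2⁻⁵² ≈ 2.22e-16). A quick check of the two epsilons:

```python
import sys

# Machine epsilon for IEEE-754 single precision (f32): 2**-23
f32_eps = 2.0 ** -23
# Machine epsilon for IEEE-754 double precision (f64): 2**-52
f64_eps = sys.float_info.epsilon

print(f32_eps)  # 1.1920928955078125e-07
print(f64_eps)  # 2.220446049250313e-16
```

So a result that was rounded through f32 at any point can only be expected to match to roughly 1.0e-07 relative tolerance, which is exactly what the `all_close` calls above show.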

Note that Nx.all_close/3 confirms that result_f64 is accurate to double precision, whereas result_f32 is accurate only to single precision; i.e.,

result_f64: 1.400000000000000
result_f32: 1.399999976158142
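For what it's worth, the `result_f32` value is exactly what you get if the constant `0.7` is first rounded to single precision and the multiply then proceeds in double precision. A minimal Python sketch (my own illustration of the IEEE-754 rounding, not Nx code):

```python
import struct

def round_to_f32(x):
    # Pack as a 32-bit IEEE-754 float and unpack back to a Python
    # float (f64), i.e. round x to the nearest single-precision value.
    return struct.unpack("f", struct.pack("f", x))[0]

c = round_to_f32(0.7)
print(c)        # 0.699999988079071  (nearest f32 to 0.7)
print(2.0 * c)  # 1.399999976158142  (the result_f32 value above)
```

This suggests the float literal is being downcast to the tensor's default f32 type before the operation, even though the other operand (and the result's reported type) is f64.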

Here's a Slack thread in the #machine-learning channel of the EEF Slack where I asked this as a question prior to filing this bug (assuming it was user error on my part - which it may still be!). There's also an Elixir script in that thread which shows the issue. In the meantime, I can make sure to use quantities like Nx.f64(0.7) in my code, although it would be nice to be able to use the simpler 0.7 at some point in the future.

I know there are possibly some related GitHub issues such as #448.

Thank you very much for your help,
Greg
