
prod(::Vector{BigInt}) broken on 1.8 and master #46665

Closed
thofma opened this issue Sep 7, 2022 · 2 comments
Labels
bignums (BigInt and BigFloat), regression (Regression in behavior compared to a previous version)

Comments

@thofma
Contributor

thofma commented Sep 7, 2022

It would be great if the following could be fixed for 1.8 (it is working fine on <= 1.7).

julia> prod(BigInt[0, 0])
ERROR: InexactError: check_top_bit(UInt64, -64)
Stacktrace:
  [1] throw_inexacterror(f::Symbol, #unused#::Type{UInt64}, val::Int64)
    @ Core ./boot.jl:614
  [2] check_top_bit
    @ ./boot.jl:628 [inlined]
  [3] toUInt64
    @ ./boot.jl:739 [inlined]
  [4] UInt64
    @ ./boot.jl:769 [inlined]
  [5] convert
    @ ./number.jl:7 [inlined]
  [6] cconvert
    @ ./essentials.jl:412 [inlined]
  [7] init2!
    @ ./gmp.jl:143 [inlined]
  [8] BigInt
    @ ./gmp.jl:56 [inlined]
  [9] prod(arr::Vector{BigInt})
    @ Base.GMP ./gmp.jl:673
 [10] top-level scope
    @ REPL[1]:1

The line

abs(x.size) * BITS_PER_LIMB - leading_zeros(unsafe_load(x.d))

looks wrong, in particular the `unsafe_load(x.d)`.

CC: @rfourquet
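
For reference, tracing the -64 in the error: for big(0) the size field is 0 and the loaded limb is guaranteed to be zero, so the line evaluates to 0 * 64 - 64 == -64, and converting that negative bit count to UInt64 for GMP's init2! is what raises the InexactError. A quick check on 1.8 (assuming a 64-bit build, where BITS_PER_LIMB == 64):

julia> x = big(0);

julia> (abs(x.size), leading_zeros(unsafe_load(x.d)))
(0, 64)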

@KristofferC added the regression (Regression in behavior compared to a previous version) and bignums (BigInt and BigFloat) labels Sep 7, 2022
rfourquet added a commit that referenced this issue Sep 7, 2022
The way we were counting the number of bits was assigning a negative number
to `0`, which could lead to a negative total number of bits. Better to just
exit early in this case.
Also, the estimate was slightly off because we were counting the number of leading
zeros in the least significant limb, instead of the most significant.
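
A minimal sketch of the corrected count, assuming Base.GMP's BigInt field layout (size::Cint, d::Ptr{Limb}) and its BITS_PER_LIMB constant; this illustrates the commit message rather than reproducing the actual patch, and nbits is a hypothetical helper name:

function nbits(x::BigInt)
    iszero(x) && return 0  # exit early: for zero, size == 0 and the formula below would go negative
    # count from the most significant limb (1-based index abs(x.size)), not the least significant one
    return abs(x.size) * Base.GMP.BITS_PER_LIMB - leading_zeros(unsafe_load(x.d, abs(x.size)))
end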
@rfourquet
Member

Thanks for the report, there were indeed two bugs there: `unsafe_load(x.d)` in isolation was fine when `x == 0`, since it is then guaranteed to return 0, but the whole line then gave a negative number; and otherwise it should have been `unsafe_load(x.d, abs(x.size))`.
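
To see why reading the least significant limb skews the estimate, consider big(2)^100 on a 64-bit build: it occupies two limbs and has 101 bits, but its low limb is all zeros.

julia> x = big(2)^100;

julia> unsafe_load(x.d)   # least significant limb
0x0000000000000000

julia> abs(x.size) * 64 - leading_zeros(unsafe_load(x.d))   # the buggy estimate
64

julia> abs(x.size) * 64 - leading_zeros(unsafe_load(x.d, abs(x.size)))   # corrected
101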

JeffBezanson pushed a commit that referenced this issue Sep 7, 2022
@thofma
Contributor Author

thofma commented Sep 8, 2022

Fixed in #46667, thanks @rfourquet. (Not sure why GitHub did not automatically close the issue.)
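
With that fix, the original reproducer returns the expected result:

julia> prod(BigInt[0, 0])
0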

@thofma thofma closed this as completed Sep 8, 2022
KristofferC pushed a commit that referenced this issue Sep 16, 2022

(cherry picked from commit f7b4ebe)