
randn! cannot fill in float32 arrays. Need a more general randn(T, ...) implementation. #9836

Closed
the-moliver opened this issue Jan 19, 2015 · 36 comments
Labels: randomness (Random number generation and the Random stdlib)

Comments

@the-moliver

a = float32(randn(10,10));
rand!(a)   # works fine
randn!(a)  # fails

ViralBShah added the randomness (Random number generation and the Random stdlib) label Jan 19, 2015
@ViralBShah
Member

Cc: @rfourquet

@rfourquet
Member

At least the doc for randn! is coherent! I'm assigning myself to this, but do we want randn! to work on any FloatingPoint array, or only on Float{16,32,64} arrays? (e.g. rand!(a) doesn't work for a BigFloat array a).

rfourquet self-assigned this Jan 20, 2015
@ivarne
Member

ivarne commented Jan 20, 2015

I would guess randn!{T}(a::AbstractArray{T}) should work when randn(::Type{T}) works. I think the same argument works for rand!, so rand!(a::Array{BigFloat}) should work (as long as rand(::Type{BigFloat}) has a reasonable result).
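A minimal sketch of that rule, using a hypothetical helper name and current Julia syntax (the real methods live in Base's random code):

    using Random  # AbstractRNG and randn! live in the Random stdlib in current Julia

    # Fill `a` in place, one scalar draw of eltype T at a time; this works for
    # exactly those T that already have a scalar randn(rng, T) method.
    function randn_fill!(rng::AbstractRNG, a::AbstractArray{T}) where {T}
        for i in eachindex(a)
            a[i] = randn(rng, T)
        end
        return a
    end

For example, randn_fill!(MersenneTwister(), zeros(Float32, 4)) works as soon as a scalar randn(rng, Float32) exists.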

@rfourquet
Member

rand(::Type{BigFloat}) is not currently implemented. But the thing is that there are no methods with signature randn(::Type{T}), i.e. randn only produces Float64 values. But it seems fair to enable (only for convenience) randn!{T<:Union(Float16,Float32)}(a::AbstractArray{T}) by means of convert. I'm less sure about BigFloat, as the destination type has more precision by default than the source type (Float64). I think I'll do a PR with only the Float16 and Float32 methods, and let other types be discussed when the need arises.
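The convert-based scalar definitions under discussion would be essentially one-liners; a sketch with a hypothetical name (not the exact PR code):

    # Draw in Float64 and narrow the result. This is harmless for randn because its
    # support is unbounded, so rounding can only lose precision, never validity.
    randn_narrow(::Type{Float32}) = Float32(randn())
    randn_narrow(::Type{Float16}) = Float16(randn())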

@ivarne
Member

ivarne commented Jan 21, 2015

The missing rand(::Type{BigFloat}) is a separate issue (probably not high priority other than for completeness). I think a MethodError is better than a simple big(rand(Float64)) implementation until we get a proper implementation that makes all the bits random.

randn doesn't make sense for many types, but I can't see why it shouldn't mirror the rand API as much as is applicable. Does the conversion approach suffer from the rounding problem that we got bitten by with rand(::Type{Float16}) = convert(Float16, rand())?

@the-moliver
Author

Just wanted to check in and see if there's an ETA for this. I'd really like to use the official version in some code. Thanks for all the hard work!

@rfourquet
Member

I'm sorry, I wanted to tackle this but I have been out of the office for the past few weeks and my laptop broke. If no one beats me to it, I think I will be able to fix this within 2 weeks.

ViralBShah changed the title from "randn! cannot fill in float32 arrays" to "randn! cannot fill in float32 arrays. Need a more general randn(T, ...) implementation." Feb 21, 2015
@ViralBShah
Member

What would be the right way to implement the randn(Float32) and randn(Float16) cases? Should we continue to do the computations in Float64 and just return converted values? That seems like a safer approach than doing all the computations in lower precision.

@ivarne
Member

ivarne commented Feb 21, 2015

Converting might give a result with a slightly wrong distribution.

rand(::Type{Float16}) = Float16(rand(Float64)) is wrong, because it might generate exactly 1.0, which is outside the desired range.
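For reference, a small illustration of why the rounding bites rand but would not bite randn (current Julia syntax):

    x = prevfloat(1.0)           # a value rand(Float64) can legitimately return, strictly < 1
    Float16(x) == Float16(1.0)   # true: rounding lands on 1.0, outside the intended [0, 1)

    Float16(randn())             # still a valid draw: the normal's support is all of R,
                                 # so rounding only coarsens the value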

@ViralBShah
Member

Cc: @dmbates @simonbyrne

@simonbyrne
Contributor

We should be able to implement Float32 versions of randn: indeed looking at the Marsaglia & Tsang paper, it seems like it was originally written for 32-bit floats.

I don't really see the point of implementing Float16. I would also lean toward not implementing BigFloat either, as it would be very tricky to get right, and forcing people to promote from Float64 might make them more aware of the limits of their random numbers.

@ViralBShah
Member

The current algorithm should work for Float32 - only the tables need to be fixed. I do now remember that the original paper was indeed for Float32.

@the-moliver
Author

Just checking on this. And on a related issue, are randg and randchi2 expected to make a return?

@ViralBShah
Member

@the-moliver For equivalents of randg and randchi2, please use the Distributions package.

http://distributionsjl.readthedocs.org/en/latest/univariate.html#list-of-distributions

@ViralBShah
Member

@andreasnoack IIRC, you are the author of the current randn Julia code. Would it be good enough if we just create Float32 tables for the ziggurat?

@ViralBShah
Member

@andreasnoack Pinging again. Is it safe to implement this one by converting the existing tables to 32-bit?

@andreasnoack
Member

Sorry, I missed the last ping. It's been a while since I worked on the ziggurat tables, but I think we'd have to recompute them. The code to generate the Float64 tables is kept as a test in test/random.jl and can probably be modified to work for Float32. Actually, I think Marsaglia's original code was for single precision.

@ViralBShah
Member

Yes, Marsaglia's original code was for single precision.

@stevengj
Member

I just ran into this one as well — I wanted randn(Complex128, dims...). That, at least, would be trivial to implement in terms of the Float64 randn code: just randn(::Type{Complex128}) = Complex(randn(),randn()) * 0.707106781186547524400844362104849
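Spelled out as a standalone sketch (hypothetical name, not the Base API): the long constant is 1/√2, so the real and imaginary parts each get variance 1/2 and the complex variate has unit variance.

    # A standard complex normal draw built from two real Float64 normals.
    randn_complex() = Complex(randn(), randn()) / sqrt(2)

    # Array convenience, analogous to randn(dims...).
    randn_complex(dims::Int...) = reshape([randn_complex() for _ in 1:prod(dims)], dims)

This 1/√2 convention is the one that eventually landed for complex element types in Julia's randn.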

@ViralBShah
Member

@andreasnoack Would you be able to generate the tables for single precision? Or can we just round the existing tables?

@rfourquet
Member

From the discussion so far, I cannot tell whether enabling a native implementation for Float32, by re-generating the tables (or rounding the existing ones), is preferable (accuracy, efficiency) over simply computing in Float64 precision and converting to Float32 (randn(::Type{Float32}) = Float32(randn())). Can someone comment authoritatively on @ivarne's remark:

Converting might give a result with a slightly wrong distribution

What about Float16?

@KrisDM

KrisDM commented Jan 26, 2016

Is there any progress on this issue? Even if it's only by truncating the Float64 values to Float32. I checked the machine code that LLVM generates for double-to-single conversions and it's a single operation. It won't massively increase the processing time for generating Float32 values, and the downstream gains are considerable (e.g., when calculating the exponential of normally distributed values, I'm already gaining more by doing that in Float32 than the conversion costs).

I understand that a proper single-precision implementation could in principle be faster, but in the absence of someone having the time to recalculate the tables for single precision, this is a low-cost way to have randn support the same types as rand.

Someone mentioned above that rounding errors gave problems for rand(Float16), because rounded values could fall outside the support, but surely this can't be the case for the normal distribution as its support is infinite?

I just submitted a request in the Distributions package to implement support for Float32 in all distributions, which is relatively easy there but would be even easier if randn had the same interface as rand.

@simonbyrne
Contributor

I don't see how converting could give the wrong distribution for randn.

@ivarne
Member

ivarne commented Jan 27, 2016

Great! I just raised the issue because I was fascinated with the rand conversion bug.

@ViralBShah
Member

@rfourquet It would be good to have a converting version for now and an open issue about a native Float32 implementation. I think it only needs new tables, and should be easy to get.

@ViralBShah
Member

Given the various constants and bit manipulations, we may have to replicate the code for the Float32 version.

rfourquet added a commit to rfourquet/julia that referenced this issue Jan 27, 2016
rfourquet added a commit to rfourquet/julia that referenced this issue Jan 27, 2016
rfourquet added a commit to rfourquet/julia that referenced this issue Jan 29, 2016
rfourquet added a commit to rfourquet/julia that referenced this issue Jan 29, 2016
@rfourquet
Member

@stevengj I started implementing randn(Complex{...}) at https://github.com/rfourquet/julia/tree/rf/randn-complex; feel free to take over that branch, or to tell me how (if it makes sense) to define randexp similarly for complex numbers.

@stevengj
Member

@rfourquet,

$randfun{F<:$Floats}(rng::AbstractRNG, ::Type{Complex{F}}) =
                  Complex(randn(rng, F), randn(rng, F)) * 0.707106781186547524400844362104849

looks good to me, except that it will promote the result to Complex{Float64}.

Now that I look at it, it's not clear that there is an unambiguous/accepted extension of randexp to complex numbers, so it might be better to omit it.
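One way to avoid that promotion (a sketch only, in current Julia syntax, not the code that eventually landed) is to convert the 1/√2 factor to the target element type before multiplying:

    using Random  # for AbstractRNG in current Julia

    # Keep the result in Complex{F} by scaling with an F-typed constant
    # instead of a Float64 literal.
    randn_complex(rng::AbstractRNG, ::Type{Complex{F}}) where {F<:AbstractFloat} =
        Complex(randn(rng, F), randn(rng, F)) * F(1 / sqrt(2))

This only works for the F that have a scalar randn(rng, F) method, which is the same constraint as in the real case.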

@simonbyrne
Contributor

I'm not a huge fan of randn(Complex{T}): to me, it's fundamentally a different distribution. Could we call it randcn, or something similar?

To put it another way, I would expect randn!(X) to give the same result as X[:] = randn(size(X)...), which it won't if X is a complex array.
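To make that objection concrete, the two fills differ as follows in today's Julia, where the complex methods did eventually land (an illustration, not part of the discussion at the time):

    using Random

    X = Array{ComplexF64}(undef, 3)
    randn!(X)                  # complex normal draws: imaginary parts are nonzero
    X[:] = randn(size(X)...)   # real Float64 draws converted: imaginary parts are all zero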

@ViralBShah
Member

I would be OK with a different name, but I'm not sure randcn quite works for me. Some alternatives that perhaps have a randn prefix? Or is this different enough that it has nothing to do with randn?

@andreasnoack
Member

Then we might want to consider whether complex Gaussian variates really need to be supported in Base. They are rather rare (e.g. I don't think MATLAB or NumPy have them) and also very simple to create from real Gaussian variates.

@stevengj
Member

@simonbyrne, I disagree that "it's a fundamentally different distribution". The definition is exactly the same as for reals, just taken over a different number field. Leaving out the complex case seems like an odd omission, considering that supporting it is trivial, and I've often been annoyed by it.

@simonbyrne
Contributor

I would contend that the fact that it's a different number field makes it a different distribution.

That said, I can certainly see the appeal/convenience of having something like this in Base, so I withdraw my previous objection (pending documentation), and would be content to keep the pedantry for Distributions.jl

rfourquet added a commit to rfourquet/julia that referenced this issue May 20, 2016
rfourquet added a commit to rfourquet/julia that referenced this issue May 20, 2016
rfourquet added a commit to rfourquet/julia that referenced this issue May 20, 2016
rfourquet added a commit to rfourquet/julia that referenced this issue May 22, 2016
rfourquet added a commit to rfourquet/julia that referenced this issue May 22, 2016
rfourquet added a commit to rfourquet/julia that referenced this issue Jun 8, 2016
rfourquet added a commit to rfourquet/julia that referenced this issue Jun 9, 2016
@fiatflux

fiatflux commented Jul 3, 2016

Perhaps lending credence to their intuitiveness, I independently implemented the same randn(::Type{Complex128}) and randn!(A::Array{Complex128}) before poking around here to see if I should start a pull request -- thanks @stevengj and @rfourquet for starting that work!

Regarding the concern about overloading randn for a different field, even though @simonbyrne seems satisfied already: rand is already overloaded as the uniform distribution over different fields (and different measure spaces therein). This seems like a syntactically parallel and equally natural extension.

@stevengj
Member

stevengj commented Jul 3, 2016

@fiatflux, a pull request would be welcome.

rfourquet added a commit to rfourquet/julia that referenced this issue Jul 24, 2016
mfasi pushed a commit to mfasi/julia that referenced this issue Sep 5, 2016
…ng#14811)

* implement randn!(::Array{Float32}) etc. (fix JuliaLang#9836)

* re-enable tests for Char in rand API (disabled in 84349f2)
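With that change merged, the snippet from the original report works; in current syntax:

    using Random

    a = randn(Float32, 10, 10)   # randn now accepts a type argument
    randn!(a)                    # and randn! fills Float32 arrays in place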