
fix(types): replace 506 occurrences of 'Float64' with 'AbstractFloat' #1126

Closed · wants to merge 6 commits

Conversation

@bionicles commented Jun 7, 2020

This potentially breaks a bunch of stuff, but I'm keen to make a change like this because it would allow us to do probabilistic programming with neural networks... @andreasnoack and @matbesancon and others, can you check this out? If we allow more flexibility in the float type, then we can more easily integrate with other libraries. I'm not familiar with this codebase, so let's discuss what to look at to see whether this is OK.

Philosophically, it doesn't make sense to require extremely precise 64-bit numbers to talk about uncertainty, although I can understand why this could be required for numerical algorithms. Is there a safe way for us to convert between float types and preserve gradients, etc.?

If an algorithm did require 64-bit numbers, and it converted 32-bit inputs to 64-bit, ran the procedure, and converted it back to 32-bit, would this screw up Zygote and ChainRules? (cc @MikeInnes and @oxinabox)
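
For concreteness, a hypothetical sketch (not code from this package) of the round trip being described:

```julia
# Widen to Float64 internally, then return in the caller's precision.
function widened_kernel(x::AbstractVector{T}) where {T<:AbstractFloat}
    y = Float64.(x)              # 32-bit -> 64-bit
    r = sum(abs2, y) / length(y)
    return T(r)                  # 64-bit -> back to the input precision
end

widened_kernel(rand(Float32, 8))  # returns a Float32
```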

@bionicles (Author)

Hmmm... VSCode seems to have screwed this up. I'll try again with Atom.

@oxinabox (Contributor) commented Jun 7, 2020

Why stop at AbstractFloat? It seems better to go all the way to Real, which would also admit rational numbers, fixed-point numbers, etc.

But yes, at the very least AbstractFloat is a good idea; cutting down to Float32 for e.g. covariance matrices will halve the amount of memory used.
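
For the memory point, a rough illustration (hypothetical arrays, not code from this package):

```julia
# Float64 stores 8 bytes per element, Float32 stores 4, so a dense
# 1000×1000 matrix (e.g. a covariance matrix) shrinks by roughly half.
C64 = randn(Float64, 1_000, 1_000)
C32 = Float32.(C64)

Base.summarysize(C64)  # ≈ 8_000_000 bytes
Base.summarysize(C32)  # ≈ 4_000_000 bytes
```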

@oxinabox (Contributor) commented Jun 7, 2020

> If an algorithm did require 64-bit numbers, and it converted 32-bit inputs to 64-bit, ran the procedure, and converted it back to 32-bit, would this screw up Zygote and ChainRules? (cc @MikeInnes and @oxinabox)

It's fine.

test/utils.jl (outdated)

```diff
@@ -41,7 +41,7 @@ N = GenericArray([1.0 0.0; 1.0 0.0])
 n = 10
 areal = randn(n,n)/2
 aimg = randn(n,n)/2
-@testset "For A containing $eltya" for eltya in (Float32, Float64, ComplexF32, ComplexF64, Int)
+@testset "For A containing $eltya" for eltya in (Float32, AbstractFloat, ComplexF32, ComplexF64, Int)
```

@johnczito (Member) commented Jun 7, 2020

These changes are much too blunt, at least in the test files. Here (^), for example, the point is to loop through different concrete eltypes.

Contributor

Probably the tests should have been excluded from the regex.

@MikeInnes (Contributor)

This is probably better done by hand (and perhaps one or two features at a time, to make it easier to review). From a quick scan, most of the changes are false positives or incorrect, e.g. changing the type parameters of outputs in examples, or dispatch constraints like Vector{AbstractFloat} which aren't doing what you want.

Most likely, significant work is needed to be truly generic (e.g. propagating the eltype of the inputs), and changing some instances of Float64 is only a minority of it.
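
To illustrate what "propagating the eltype" means, a hypothetical sketch (the function names are made up, not from this package):

```julia
# Hard-coded: always allocates Float64, regardless of the input type.
suffstats_rigid(x::AbstractVector) = zeros(Float64, length(x))

# Generic: the working array inherits the element type of the input,
# promoted to a floating-point type so that integer inputs still work.
function suffstats_generic(x::AbstractVector{T}) where {T<:Real}
    S = float(T)
    return zeros(S, length(x))
end

suffstats_generic(rand(Float32, 5))  # Vector{Float32}
suffstats_generic(1:5)               # Vector{Float64} (Int promotes to Float64)
```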

@oxinabox (Contributor) commented Jun 8, 2020

Ideally, SourceWalk.jl would be usable for this rough first pass rather than doing it by hand, which would be a bit smarter, but a manual pass would still be needed afterwards.

@bionicles (Author) commented Jun 22, 2020

OK, I swapped the old test directory back in; next we can look at the examples and dispatch constraints.

Does anyone know a better approach to get this done efficiently? I'm really not familiar with this codebase and have only about a week of Julia experience, so it feels wrong for me to be tinkering here, but it would be super useful to hook Distributions up to Flux models.

@bionicles (Author)

I didn't even know the Real type was a thing; I'll look at that. Guidance appreciated.

@bionicles (Author)

If we use Real, would that require us to convert Int to Float?

@bionicles (Author) commented Jun 22, 2020

Hmmm, it looks like multiple dispatch freaks out. I thought AbstractFloat would accept Float64, but apparently a method defined for AbstractArray{AbstractFloat} doesn't match Array{Float64}... super weird... are we really expected to define everything for every data type? That would be a disgusting amount of busywork. Perhaps we should just delete the types altogether.

@bionicles (Author)

AbstractMatrix{AbstractFloat} -> AbstractMatrix

@kmsquire (Contributor)

> it looks like multiple dispatch freaks out. I thought AbstractFloat would accept Float64 but apparently a method defined for AbstractArray{AbstractFloat} doesn't match Array{Float64}... super weird... are we really expected to define everything for every data type? That would be a disgusting amount of busywork. Perhaps we should just delete the types altogether.

See https://docs.julialang.org/en/v1/manual/types/#Parametric-Composite-Types-1 for an explanation of why this happens: parametric types in Julia are invariant, so Array{Float64} is not a subtype of AbstractArray{AbstractFloat}.

Instead of using AbstractArray{AbstractFloat}, you can use AbstractArray{<:AbstractFloat}.
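
A minimal sketch (not from the thread) of the invariance point and the <: fix:

```julia
# Parametric types are invariant in Julia:
Vector{Float64} <: Vector{AbstractFloat}    # false
Vector{Float64} <: Vector{<:AbstractFloat}  # true

# So this method is never selected for a Vector{Float64} argument:
f(x::AbstractArray{AbstractFloat}) = sum(x)

# ...while this one dispatches for any concrete float element type:
g(x::AbstractArray{<:AbstractFloat}) = sum(x)

g(rand(Float64, 3))  # works
g(rand(Float32, 3))  # works
```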

@andreasnoack (Member)

As mentioned in #1126 (comment), this has to be done more carefully. E.g. some of these functions only work correctly for Float64 inputs, so I'm closing this one.
