
Reverts #1557 #1561

Merged: 4 commits merged from dg/rev2 into master, Apr 1, 2021
Conversation

DhairyaLGandhi (Member)

Reverts #1557

@DhairyaLGandhi (Member, Author)

Closes #1560 #1559

cc @darsnack @CarloLucibello

@DhairyaLGandhi (Member, Author)

bors r+

bors bot added a commit that referenced this pull request Apr 1, 2021
1561: Reverts #1557 r=DhairyaLGandhi a=DhairyaLGandhi

Reverts #1557 

Co-authored-by: Dhairya Gandhi <dhairya@juliacomputing.com>
@darsnack (Member) left a comment

I believe this is correct other than the one test.

@@ -40,17 +40,16 @@ import Flux: activations
 @test Dense(rand(Float16, 100,10), true).bias isa Vector{Float16} # creates matching type
 @test_skip Dense(rand(Float16, 100,10), rand(100)).bias isa Vector{Float16} # converts to match

-@test_skip Dense(3,4; init=Base.randn, bias=true).bias isa Vector{Float64}
+@test Dense(3,4; init=Base.randn, bias=true).bias isa Vector{Float64}
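For context, a minimal sketch of what the changed line constructs (illustrative only; whether the resulting bias eltype is Float64 or Float32 is exactly what the discussion below debates):

using Flux

layer = Dense(3, 4; init=Base.randn, bias=true)   # Base.randn gives Float64 weights

# The assertion under review: does the bias eltype follow the Float64 weights,
# or does it come from a Float32 default? See the comments below.
layer.bias isa Vector{Float64}
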
Member

This should be a @test_skip, right? The correct eltype passing is an accident here (because bias = true uses zeros, which happens to be Float64).

@DhairyaLGandhi (Member, Author), Apr 1, 2021

No. This was in #1558, and it uses Flux.zeros (not Base.zeros) which produces Float32 by default.

julia> Flux.zeros(4)
4-element Array{Float32,1}:
 0.0
 0.0
 0.0
 0.0

I agree that the test isn't explicit about its intent. This should be rewritten.
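One possible shape for such a rewrite, as a sketch only (the two assertions below restate facts already shown in this thread; this is not necessarily the form the test eventually took):

using Flux, Test

# Make the two sources of the bias eltype explicit rather than implicit.
@test Dense(rand(Float16, 4, 3), true).bias isa Vector{Float16}   # bias=true matches explicitly supplied weights
@test eltype(Flux.zeros(4)) == Float32                            # the default zeros shown above are Float32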

Member

Ah, I see. Okay, we'll address it separately. Thanks!

Member

Reverting here is good; we should produce a Float64 bias when the weights produced are Float64.

@DhairyaLGandhi (Member, Author)

> we should produce float64 bias when the weights produced are float64

No. We should do as the user tells us. Flux is meant to be hackable. While it is valid to have defaults, overrides are required at different levels.
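For illustration, the levels of override being referred to, using only call patterns that already appear in the tests above (a sketch of the idea, not a specification of the constructor):

using Flux

Dense(3, 4)                                    # library defaults for both weight and bias
Dense(3, 4; init=Base.randn, bias=true)        # user overrides the weight init
Dense(rand(Float16, 4, 3), rand(Float16, 4))   # user supplies the weight and bias explicitly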

Member

I'm referring to the specific case tested by the line we are commenting on here. When doing Dense(3,4; init=Base.randn, bias=true), the bias should be Float64.

@darsnack mentioned this pull request, Apr 1, 2021
@DhairyaLGandhi requested a review from darsnack, April 1, 2021 00:51
@DhairyaLGandhi (Member, Author)

bors r+

@darsnack (Member) commented Apr 1, 2021

Okay, so do I need to hit approve, or is bors okay? (Sorry, it's evening here so I'm on mobile.)

@bors bot (Contributor) commented Apr 1, 2021

Build succeeded.

@bors bot merged commit 8a5b977 into master, Apr 1, 2021
@DhairyaLGandhi deleted the dg/rev2 branch, April 1, 2021 01:22