
Check issimpleenoughtype before stripping off type parameters in tmerge #39980

Merged — 6 commits, Mar 15, 2021

Conversation

tkf (Member) commented Mar 10, 2021

This patch makes the following code inferable:

julia> Base.return_types((Int,)) do x
           if x === 0
               Some(0.0)
           elseif x == 1
               Some(1)
           else
               Some(0x2)
           end
       end
1-element Vector{Any}:
 Union{Some{Float64}, Some{Int64}, Some{UInt8}}

while before we had:

julia> Base.return_types((Int,)) do x
           if x === 0
               Some(0.0)
           elseif x == 1
               Some(1)
           else
               Some(0x2)
           end
       end
1-element Vector{Any}:
 Some

This was because tmerge was treating wrapped types specially:

julia> Core.Compiler.tmerge(Union{Int8,Int16}, Int32)
Union{Int16, Int32, Int8}

julia> Core.Compiler.tmerge(Union{Some{Int8},Some{Int16}}, Int32)
Union{Int32, Some}

after this PR:

julia> Core.Compiler.tmerge(Union{Some{Int8},Some{Int16}}, Int32)
Union{Some{Int16}, Some{Int8}, Int32}

This is done simply by checking the complexity of the Union before stripping off the type parameters of the wrapper types.
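The idea can be pictured with a small sketch. This is an illustrative model only, not the actual Core.Compiler code: `sketch_tmerge`, `union_length`, `strip_params`, and the `MAX_TYPEUNION_LENGTH` budget are all hypothetical names made up for this example.

```julia
# Conceptual sketch (NOT the real Core.Compiler implementation): keep the
# full parametric types when the merged Union is still "simple enough",
# and only otherwise fall back to stripping type parameters.

const MAX_TYPEUNION_LENGTH = 3  # hypothetical complexity budget

# Count how many members a (possibly nested) Union has.
union_length(T) = T isa Union ? union_length(T.a) + union_length(T.b) : 1

# Drop type parameters, e.g. Some{Int8} -> Some (the old fallback behavior).
strip_params(T) = T isa Union    ? Union{strip_params(T.a), strip_params(T.b)} :
                  T isa DataType ? T.name.wrapper : T

function sketch_tmerge(A, B)
    merged = Union{A, B}
    # New behavior: check the complexity budget *before* widening.
    union_length(merged) <= MAX_TYPEUNION_LENGTH && return merged
    # Old fallback: widen by dropping the type parameters of each member.
    return Union{strip_params(A), strip_params(B)}
end
```

With this model, `sketch_tmerge(Union{Some{Int8},Some{Int16}}, Int32)` stays at the precise three-member Union, while a four-member merge collapses to `Union{Some, Int64}`, mirroring the before/after behavior shown above.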

I think reducing the difference between the behavior of unions of parametric and non-parametric types is nice, since it makes it easier for Julia programmers to have consistent expectations about when union splitting happens.

My motivation behind this is that Transducers.jl creates "FSMs" at the type level, using nested parameterized types, to generate non-trivial reductions. So it'd be great if we could improve the type inference of small unions of wrapped types. This is particularly useful on GPUs, where I need to make sure all dynamic dispatches are removed.
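To see why a small Union of concrete types matters for dispatch, consider a hypothetical illustration (`f` and `g` are made-up names, not from the PR): when inference returns a small concrete Union, later calls on the result can be union-split into one static branch per member instead of falling back to dynamic dispatch on the abstract `Some`.

```julia
# Returns one of three differently parameterized Some values.
f(x::Int) = x === 0 ? Some(0.0) : (x == 1 ? Some(1) : Some(0x2))

# With this PR, f infers as Union{Some{Float64}, Some{Int64}, Some{UInt8}},
# so the `something` and `+` calls below can be union-split into three
# static branches; before, f inferred as the abstract `Some`, forcing a
# dynamic dispatch here.
g(x::Int) = something(f(x)) + 1
```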

tkf (Member, Author) commented Mar 10, 2021

With make test-compiler, I get:

Test Failed at /home/arakaki/repos/watch/julia/test/compiler/codegen.jl:84
  Expression: tempty == true
   Evaluated: false == true

      From worker 2:    WARNING: replacing module M.
compiler/inference   (2) |    12.31 |   0.40 |  3.2 |    1415.06 |   367.31

Test Summary:         | Pass  Fail  Broken  Total
  Overall             | 1213     1       2   1216
    compiler/validation |   26                   26
    compiler/inline   |   52             1     53
    compiler/contextual |    6                    6
    compiler/irpasses |   25                   25
    compiler/ssair    |   38                   38
    compiler/codegen  |  136     1            137
    compiler/inference |  930             1    931
    FAILURE

To look into the problem, I tweaked the test a bit:

diff --git a/test/compiler/codegen.jl b/test/compiler/codegen.jl
index 2b7b266751..3a7d7f0557 100644
--- a/test/compiler/codegen.jl
+++ b/test/compiler/codegen.jl
@@ -68,7 +68,8 @@ end
 # This function tests if a toplevel thunk is output if jl_dump_compiles is enabled.
 # The eval statement creates the toplevel thunk.
 function test_jl_dump_compiles_toplevel_thunks()
-    tfile = tempname()
+    # tfile = tempname()
+    tfile = "dump"
     io = open(tfile, "w")
     # Make sure to cause compilation of the eval function
     # before calling it below.
@@ -80,7 +81,8 @@ function test_jl_dump_compiles_toplevel_thunks()
     close(io)
     tstats = stat(tfile)
     tempty = tstats.size == 0
-    rm(tfile)
+    # rm(tfile)
     @test tempty == true
 end

and the tfile contains

$ cat test/dump
4123865 "Tuple{Base.var"#582#583"{Base.WeakKeyDict{Distributed.AbstractRemoteRef, Nothing}}, Distributed.Future}"

Looking at its method

julia> methods(Base.var"#582#583"{Nothing}.instance)
# 1 method for anonymous function "#582":
[1] (::Base.var"#582#583")(k) in Base at weakkeydict.jl:25

suggests that this is from a finalizer

t.finalizer = k -> t.dirty = true

So, I added a patch to disable GC in test_jl_dump_compiles_toplevel_thunks:

diff --git a/test/compiler/codegen.jl b/test/compiler/codegen.jl
index 2b7b266751..47f419d993 100644
--- a/test/compiler/codegen.jl
+++ b/test/compiler/codegen.jl
@@ -73,11 +73,13 @@ function test_jl_dump_compiles_toplevel_thunks()
     # Make sure to cause compilation of the eval function
     # before calling it below.
     Core.eval(Main, Any[:(nothing)][1])
+    GC.enable(false)  # avoid finalizers to be compiled
     topthunk = Meta.lower(Main, :(for i in 1:10; end))
     ccall(:jl_dump_compiles, Cvoid, (Ptr{Cvoid},), io.handle)
     Core.eval(Main, topthunk)
     ccall(:jl_dump_compiles, Cvoid, (Ptr{Cvoid},), C_NULL)
     close(io)
+    GC.enable(true)
     tstats = stat(tfile)
     tempty = tstats.size == 0
     rm(tfile)

This fixes the problem for me. (Edit: included in the PR: 72fb012)

Ref: #27086 where "Core.eval(Main, Any[:(nothing)][1]) hack" was introduced.
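As an aside, a more defensive variant of that GC toggle would use try/finally so the GC is re-enabled even if the wrapped code throws. This is just a sketch, not what the PR actually does; `with_gc_disabled` is a hypothetical helper name.

```julia
# Hypothetical helper (not part of the PR): run f with the GC disabled,
# guaranteeing the GC is re-enabled even if f throws.
function with_gc_disabled(f)
    GC.enable(false)
    try
        return f()
    finally
        GC.enable(true)
    end
end
```

The PR's inline GC.enable(false)/GC.enable(true) pair is equivalent on the happy path; the try/finally only matters if the code in between can throw.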

aviatesk (Member) commented Mar 11, 2021

Yeah, I encountered that error before too, and I think it would be better to fix it.
I would also point out that the error usually happened when there was some compile-time overhead (at least in my case there was). Since this PR can potentially have a drastic impact on inference, in terms of both accuracy and performance, I guess we need to check the nanosoldier benchmark?

tkf (Member, Author) commented Mar 11, 2021

Actually, my suggestion is to include the GC off/on toggle as in my last comment: 72fb012. Is there a downside to turning off GC during this test? Another thing I tried was calling GC.gc() just before jl_dump_compiles. That also worked, but I thought GC.enable(false) might be more reliable.

I guess we need to check the nanosoldier benchmark ?

Yeah, I agree we should run it 👍 if the core devs are OK with this patch.

vtjnash (Member) left a review comment:

I'd been mildly avoiding this because of the potential for increased Union exploration time, but I think it does improve associativity, so perhaps we should do it.

tkf force-pushed the always-use-simpleenoughtype branch from 299d907 to 5239b25 on March 11, 2021 at 17:14
tkf (Member, Author) commented Mar 11, 2021

@nanosoldier runbenchmarks(ALL, vs=":master")

nanosoldier (Collaborator) commented:

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @christopher-dG

JeffBezanson (Member) commented:

So far this doesn't seem to change compile times much, so let's try it.

JeffBezanson merged commit 3635f04 into JuliaLang:master on Mar 15, 2021
ElOceanografo pushed a commit to ElOceanografo/julia that referenced this pull request May 4, 2021
…ge (JuliaLang#39980)

* Check issimpleenoughtype before stripping off type parameters

* Avoid finalizer to be compiled during test_jl_dump_compiles_toplevel_thunks
antoine-levitt pushed a commit to antoine-levitt/julia that referenced this pull request May 9, 2021
…ge (JuliaLang#39980)

* Check issimpleenoughtype before stripping off type parameters

* Avoid finalizer to be compiled during test_jl_dump_compiles_toplevel_thunks
KristofferC added the backport 1.6 label (Change should be backported to release-1.6) on Jul 12, 2021
KristofferC pushed a commit that referenced this pull request Jul 12, 2021
…ge (#39980)

* Check issimpleenoughtype before stripping off type parameters

* Avoid finalizer to be compiled during test_jl_dump_compiles_toplevel_thunks

(cherry picked from commit 3635f04)
KristofferC mentioned this pull request on Jul 12, 2021
KristofferC (Member) commented:

Removing from backports due to #42007 (comment).

KristofferC removed the backport 1.6 label on Aug 31, 2021