
rearrange inlining of ensureroom and speed it up #27874

Merged: 3 commits merged into master from jb/iobufinl on Oct 30, 2018
Conversation

@JeffBezanson (Member)

If you look at the IR for write(::IOBuffer, ::UInt8), it is absurdly large since both ensureroom and compact get inlined, plus there are spurious error checks for cases that don't happen. This separates the rare part of ensureroom into its own noinline function, and tightens up some of the rest of the code. This benchmark:

function buf()
    b = IOBuffer()
    for i = 1:100000
        write(b, '\u2200')
    end
    String(take!(b))
end

gets almost 2x faster, and code size is significantly reduced.
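(For illustration, a minimal sketch of the fast-path/slow-path split this PR describes, with simplified growth logic; the guard incorporates the reviewer suggestion below, and ensureroom_slowpath is the helper name that appears later in the thread:)

@inline function ensureroom(io::GenericIOBuffer, nshort::UInt)
    if !io.writable || (!io.seekable && io.ptr > 1)
        # Rare case: error checks and compaction live in a separate,
        # non-inlined function so they don't bloat every write site.
        ensureroom_slowpath(io, nshort)
    end
    # Common case: just grow the backing array if needed.
    n = min((nshort % Int) + (io.append ? io.size : io.ptr - 1), io.maxsize)
    length(io.data) < n && resize!(io.data, n)
    return io
end

@noinline function ensureroom_slowpath(io::GenericIOBuffer, nshort::UInt)
    io.writable || throw(ArgumentError("ensureroom failed, IOBuffer is not writeable"))
    compact(io)  # reclaim space from already-consumed bytes
    return nothing
end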

@JeffBezanson added the performance, io, and strings labels on Jun 29, 2018
Review thread on base/iobuffer.jl (outdated diff excerpt):

@inline ensureroom(io::GenericIOBuffer, nshort::Int) = ensureroom(io, UInt(nshort))
@inline function ensureroom(io::GenericIOBuffer, nshort::UInt)
    if !io.writable || !io.seekable
A Member commented:

Can we make this !io.writable || (!io.seekable && io.ptr > 1)? One extra comparison and branch here shouldn't add much cost, but it would restore the expected fast path for !seekable.

@KristofferC (Member)

@nanosoldier runbenchmarks(ALL, vs = ":master")

@nanosoldier (Collaborator)

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @ararslan

@vtjnash (Member) commented Jun 30, 2018

String join is probably a test of this, and it shows a large regression, but one within the variance sometimes observed, so we may need to check that locally.

@ararslan (Member) commented Jul 2, 2018

@nanosoldier runbenchmarks("string" && "join", vs=":master")

@nanosoldier (Collaborator)

Your benchmark job has completed - no performance regressions were detected. A full report can be found here. cc @ararslan

@JeffBezanson (Member, Author)

Hmm, this now seems to have an unusually large effect on sysimg build time. Will need to investigate; it might turn up something generally interesting.

@KristofferC (Member) commented Aug 16, 2018

Indeed, including Base goes from 27 to 74 seconds with this...

@JeffBezanson added the DO NOT MERGE label on Sep 7, 2018
@vtjnash (Member) commented Oct 22, 2018

rebase this and see if it's better now?

@KristofferC (Member)

Base  ─────────── 78.449089 seconds

@KristofferC (Member)

FWIW, I found that almost all of the slowdown is from including show.jl, which on this branch takes 47 seconds compared to 2.7 seconds on master. The rest of the files seem to have fairly similar timings.

@KristofferC (Member)

In fact, evaluating the following method definition seems to account for the full slowdown:

julia/base/show.jl, lines 1567 to 1586 at c162219:

function show(io::IO, src::CodeInfo)
    # Fix slot names and types in function body
    print(io, "CodeInfo(")
    lambda_io::IOContext = io
    if src.slotnames !== nothing
        lambda_io = IOContext(lambda_io, :SOURCE_SLOTNAMES => sourceinfo_slotnames(src))
    end
    @assert src.codelocs !== nothing
    if isempty(src.linetable) || src.linetable[1] isa LineInfoNode
        println(io)
        # TODO: static parameter values?
        IRShow.show_ir(lambda_io, src)
    else
        # this is a CodeInfo that has not been used as a method yet,
        # so its locations are still LineNumberNodes
        body = Expr(:block)
        body.args = src.code
        show(lambda_io, body)
    end
    print(io, ")")
end

Hopefully that is enough to figure out what is going on.

@KristofferC (Member) commented Oct 22, 2018

In fact, replacing that function definition with

g() = @assert 1==1

is enough to trigger the slowdown, so it has something to do with the string handling in the @assert macro.

Note that

g() = @assert true

does not lead to a slowdown.

@vtjnash (Member) commented Oct 22, 2018

Defining show(io::IO, src::CodeInfo) = @assert 1 == 1 is effectively equivalent to calling string(:(1 == 1)).
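(Roughly why, as a hedged sketch: when @assert is given no custom message, the macro builds the failure text by stringifying the condition at macro-expansion time, so the string call runs while the file is being evaluated:)

# What evaluating g() = @assert 1==1 triggers during macro expansion, roughly:
msg = string(:(1 == 1))  # stringify the condition Expr for the error message
# @assert true stringifies the literal `true` instead, which never reaches
# the Expr-printing (IOBuffer-heavy) code path.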

@KristofferC (Member) commented Oct 22, 2018

Like @vtjnash said, just putting a string(:(1 == 1)) at top level is enough to trigger it.

@KristofferC (Member) commented Oct 22, 2018

We have

string(xs...) = print_to_string(xs...)

and

julia> @which string(:(1==1))
string(xs...) in Base at strings/io.jl:155

but evaluating

print_to_string(:(1==1))

does not give a slowdown while

string(:(1==1))

does. I think I am reaching my limit here :)

@vtjnash (Member) commented Oct 22, 2018

Quite strange. You could try enabling JL_DEBUG_METHOD_INVALIDATION (in gf.c) and/or setting --trace-compile= during compilation and see if there are substantive differences.
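(For reference, a hedged usage example of the second suggestion; the flag takes stderr or a file name and logs precompile statements as methods get compiled:)

julia --trace-compile=stderr -e 'string(:(1 == 1))'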

@KristofferC (Member) commented Oct 23, 2018

Commenting out the compact call

https://github.com/JuliaLang/julia/pull/27874/files#diff-1cac367ddf750f99a052d3119eb8381fR288

makes precompile(Tuple{typeof(Base.string), Expr})

go from 56 seconds to 0.4 seconds.

Also inserting a

precompile(Tuple{typeof(Base.compact), Base.GenericIOBuffer{Array{UInt8,1}}})

before the call to

precompile(Tuple{typeof(Base.string), Expr})

makes it fast.
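(Consolidated, that ordering workaround looks like:)

precompile(Tuple{typeof(Base.compact), Base.GenericIOBuffer{Array{UInt8,1}}})  # seed compact first
precompile(Tuple{typeof(Base.string), Expr})  # now ~0.4 seconds instead of ~56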

Forcing an interrupt (in the slow case) gives

sig_match_simple at /Users/kristoffer/julia/src/typemap.c:175
jl_typemap_entry_assoc_exact at /Users/kristoffer/julia/src/typemap.c:781
jl_typemap_assoc_exact at /Users/kristoffer/julia/src/./julia_internal.h:879 [inlined]
jl_lookup_generic_ at /Users/kristoffer/julia/src/gf.c:2148
jl_apply_generic at /Users/kristoffer/julia/src/gf.c:2194
pure_eval_call at ./compiler/abstractinterpretation.jl:496
abstract_call at ./compiler/abstractinterpretation.jl:729
abstract_eval_call at ./compiler/abstractinterpretation.jl:765
abstract_eval at ./compiler/abstractinterpretation.jl:850
typeinf_local at ./compiler/abstractinterpretation.jl:1093
typeinf_nocycle at ./compiler/abstractinterpretation.jl:1149
typeinf at ./compiler/typeinfer.jl:15
typeinf_edge at ./compiler/typeinfer.jl:492
abstract_call_method at ./compiler/abstractinterpretation.jl:324
abstract_call_gf_by_type at ./compiler/abstractinterpretation.jl:81
abstract_call at ./compiler/abstractinterpretation.jl:736
abstract_eval_call at ./compiler/abstractinterpretation.jl:765
abstract_eval at ./compiler/abstractinterpretation.jl:850
typeinf_local at ./compiler/abstractinterpretation.jl:1093
typeinf_nocycle at ./compiler/abstractinterpretation.jl:1149
typeinf at ./compiler/typeinfer.jl:15
...

over and over.

I tried to do some profiling but didn't get anything reasonable.

[screenshot: profiler output, 2018-10-23 10:26]

@KristofferC (Member) commented Oct 23, 2018

Commenting out the

    copyto!(io.data, 1, io.data, ptr, bytes_to_move)

in compact makes it fast.

@KristofferC (Member)

I pushed a commit that works around the inference problem. Good ol' outlining to the rescue...
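(For illustration — not necessarily the exact commit — outlining here means moving the copyto! call identified above behind a @noinline function barrier so inference handles it separately; the helper name is hypothetical:)

@noinline function compact_moved_bytes!(data::Vector{UInt8}, ptr::Int, bytes_to_move::Int)
    # Hypothetical outlined helper: the problematic call now sits in its
    # own function, giving inference a boundary to stop at.
    copyto!(data, 1, data, ptr, bytes_to_move)
    return nothing
end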

@nanosoldier runbenchmarks(ALL, vs = ":master")

@ararslan (Member)

Nanosoldier needs code quoting around the command for whatever reason. @nanosoldier runbenchmarks(ALL, vs=":master")

@KristofferC (Member) commented Oct 23, 2018

@vtjnash noted that this PR didn't follow the style guide proposed in #29727 and therefore gave the compiler a bit of extra headache (fixing it didn't remove the need for the first workaround, however):

julia> @code_warntype Base.ensureroom_slowpath(IOBuffer(), UInt64(5))
Body::Union{Nothing, Int64, GenericIOBuffer{Array{UInt8,1}}}

Fixed in the last commit.
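(A hedged sketch of that fix: end every path of the side-effect helper with return nothing so inference sees a single concrete return type instead of the Union above; the body is elided here:)

@noinline function ensureroom_slowpath(io::GenericIOBuffer, nshort::UInt)
    # ... error checks and compaction as before ...
    return nothing  # uniform return type; no more Union{Nothing, Int64, ...}
end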

@KristofferC (Member)

I'm curious whether that changed the benchmarks at all.

@nanosoldier runbenchmarks(ALL, vs=":master")

@nanosoldier (Collaborator)

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @ararslan

@KristofferC added the potential benchmark label on Oct 24, 2018
@nanosoldier (Collaborator)

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @ararslan

@KristofferC (Member)

On 1.0.1 I don't see compact getting inlined (though it does seem to get inlined on master), but I can't reproduce the performance improvements from the OP.

master:

julia> @btime buf();
  3.334 ms (20 allocations: 514.55 KiB)

1.0.1:

julia> @btime buf();
  3.311 ms (19 allocations: 514.58 KiB)

PR:

julia> @btime buf();
  3.313 ms (19 allocations: 514.53 KiB)

But less code bloat is good anyway?

@KristofferC (Member) commented Oct 29, 2018

What should be done here now that #29795 is merged? If the @noinline is no longer needed, should I just drop it and then we merge, or just merge as is? @vtjnash

@vtjnash (Member) left a review:

LGTM. Do we need a nanosoldier run or any other such test?

@KristofferC (Member)

There hasn't been any change here since the last nanosoldier run, so presumably not.

@JeffBezanson removed the DO NOT MERGE label on Oct 29, 2018
@KristofferC merged commit e2f7b2f into master on Oct 30, 2018
@KristofferC deleted the jb/iobufinl branch on October 30, 2018 at 20:08