
inference: improve cycle detection #21933

Merged: 4 commits merged into master from jn/infer-improve-cycle-detection on Jun 13, 2017

Conversation

@vtjnash (Member) commented May 18, 2017

This helps ensure that inference converges by ensuring that any recursion must be finite. It also fixes a number of functions in Base that weren't finite under recursion and thus could trigger infinite loops in inference (especially if we tried to infer based just on method signatures). I'm mildly optimistic that this will permit increasing tupletype_len :)

fixes #21244
fixes #17278
fixes #20714
fixes #19570
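As a minimal sketch of the kind of divergence being fixed (the function below is hypothetical, not from this PR): the argument type grows at every recursive call, so without cycle/complexity detection inference would chase an ever-deeper signature.

# Hypothetical example, not from this PR: each call wraps the argument in one more
# tuple, so the sequence of inferred signatures never repeats and has to be widened.
grow(x) = grow((x,))
# Inferring grow(1) sees grow(::Int) -> grow(::Tuple{Int}) -> grow(::Tuple{Tuple{Int}}) -> ...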

@ararslan added the compiler:inference (Type inference) label on May 18, 2017
(next, _cshp(ndim + 1, (), tail(shape), tail(nshape))...)
end
@inline function _cshp(ndim::Int, dims, shape, nshape)
a = shape[1]
Member:

Should be 4 space indents

source[1] === source[2] && (source = svec(source[1]))
type_more_complex(t, compare, source) || return t
r = _limit_type_size(t, compare, source)
#@assert !isa(t, Type) || t <: r
Member:

Is this something you'd expect to be re-enabled at some point?

Member Author:

Perhaps. Currently it would just trigger the same false-bug detection as #21907.

base/tuple.jl Outdated
ntuple(f, ::Type{Val{12}}) = (@_inline_meta; (f(1), f(2), f(3), f(4), f(5), f(6), f(7), f(8), f(9), f(10), f(11), f(12)))
ntuple(f, ::Type{Val{13}}) = (@_inline_meta; (f(1), f(2), f(3), f(4), f(5), f(6), f(7), f(8), f(9), f(10), f(11), f(12), f(13)))
ntuple(f, ::Type{Val{14}}) = (@_inline_meta; (f(1), f(2), f(3), f(4), f(5), f(6), f(7), f(8), f(9), f(10), f(11), f(12), f(13), f(14)))
ntuple(f, ::Type{Val{15}}) = (@_inline_meta; (f(1), f(2), f(3), f(4), f(5), f(6), f(7), f(8), f(9), f(10), f(11), f(12), f(13), f(14), f(15)))
Member:

Is this not needed anymore?

Member Author:

It was just for adding the inline meta (#21446), which the generated function can already do.
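For context, the generated-function fallback referred to here can be sketched roughly as follows (a simplified, hypothetical ntuple_sketch, not the actual Base definition): it builds the unrolled tuple expression at compile time, which is why the hand-written arities above are no longer needed.

# Simplified sketch; the real Base method also attaches the inline meta that the
# hand-written arities existed to provide.
@generated function ntuple_sketch(f, ::Type{Val{N}}) where {N}
    args = [:(f($i)) for i in 1:N]
    return :(($(args...),))
end

ntuple_sketch(i -> i^2, Val{3})  # (1, 4, 9)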

base/array.jl Outdated
@_inline_meta
_size((out..., size(A,M+1)), A)
end
size(a::Array{<:Any,N}) where {N} = (@_inline_meta; ntuple(M -> size(a, M), Val{N}))
Member:

If a function like this one was not detected as convergent, will this likely wreak havoc among packages that used any kind of lispy recursion?

Member Author:

Lispy recursion is fine; this is just more concise and clear. However, collector arguments aren't allowed. Hopefully I've shown enough examples in this PR of fixing method calls of that sort.
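To make the distinction concrete, a hypothetical pair (not from this diff): plain lispy recursion peels one element off the tuple per call, so the argument types shrink and inference converges, whereas a collector (accumulator) argument grows at every call, which is the pattern being removed throughout this PR.

# Fine: the tuple argument only gets shorter at each recursive call.
mysum(t::Tuple{}) = 0
mysum(t::Tuple) = first(t) + mysum(Base.tail(t))

# Penalized: the collector `acc` gets longer at every call, so the inferred
# signatures keep growing until the limiting heuristic steps in.
revtup(t::Tuple{}, acc) = acc
revtup(t::Tuple, acc) = revtup(Base.tail(t), (first(t), acc...))

revtup((1, 2, 3), ())  # (3, 2, 1)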

@KristofferC (Member):

@nanosoldier runbenchmarks(ALL, vs = ":master")

@martinholters (Member) left a comment:

OK, I admit I don't have the time/patience to work through the details here.

But if I understand correctly, this would not yet solve the evil case of #21244 (comment), or does it? Would imposing an additional limit on the recursion depth, independent of type complexity, be feasible and reasonable?

end
return convert(typeof(parent(Ai)), Ai)
return Ai
Member:

Why this change?

Member Author:

We eventually want inference to make this transform, but in general it's quite difficult (and the generated code for this is currently pretty poor).

@@ -15,6 +15,7 @@ struct InferenceParams
inlining::Bool

# parameters limiting potentially-infinite types (configurable)
MAX_METHODS::Int
Member:

👍

@nanosoldier (Collaborator):

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @jrevels

@vtjnash force-pushed the jn/infer-improve-cycle-detection branch from f5fc83c to 3b82cd6 on May 18, 2017 19:23
@vtjnash (Member Author) commented May 18, 2017

@martinholters yes, this fixes all of the issues brought up in #21244

Would imposing an additional limit on the recursion depth, independent of type complexity, be feasible and reasonable?

There's an existing limit on the recursion depth of argument types (which also bounds the depth of inference recursion), but the computation of it was inaccurate, leading to potentially unbounded recursion. This fixes the inaccuracy (demonstrated by the failures in the issue you linked), and thus should ensure that inference will not run into infinite cases (it also generally tries to avoid very large cases).
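As a rough picture of what such a limit does, here is a toy, self-contained depth cap (limit_tuple_depth_toy is a made-up name, handles only plain Tuple types, and is not the code in this PR): types nested deeper than the budget get widened, which is what keeps the comparison of a callee's argument types against its caller's finite.

# Toy depth cap: anything nested deeper than the budget is widened to the
# unparameterized Tuple, so the walk over the type cannot recurse forever.
function limit_tuple_depth_toy(t, d::Int)
    (isa(t, DataType) && t <: Tuple) || return t
    d <= 0 && return Tuple
    return Tuple{map(p -> limit_tuple_depth_toy(p, d - 1), (t.parameters...,))...}
end

limit_tuple_depth_toy(Tuple{Tuple{Tuple{Int}}}, 2)  # Tuple{Tuple{Tuple}}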

@ararslan (Member):

@nanosoldier runbenchmarks("array" || "broadcast", vs=":master")

@nanosoldier (Collaborator):

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @jrevels

_cshp((), (out..., 1), tail(shape), ())
_cshp(ndim::Int, ::Tuple{}, ::Tuple{}, ::Tuple{}) = ()
_cshp(ndim::Int, ::Tuple{}, ::Tuple{}, nshape) = nshape
_cshp(ndim::Int, dims, ::Tuple{}, ::Tuple{}) = (map(b -> 1, dims)...)
Contributor:

I believe the splatting can be avoided here (since dims is a tuple), so just map(b -> 1, dims) should do.
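For reference, map over a tuple already returns a tuple, so the splat adds nothing:

map(b -> 1, (2, 3, 4))  # (1, 1, 1), already a tuple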

base/tuple.jl Outdated
# inferrable ntuple (enough for bootstrapping)
ntuple(f, ::Type{Val{0}}) = ()
ntuple(f::F, ::Type{Val{1}}) where {F} = (@_inline_meta; (f(1),))
ntuple(f::F, ::Type{Val{2}}) where {F} = (@_inline_meta; (f(1), f(2)))
Contributor:

Since these functions are called, why is the specialization-forcing needed?
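For background on the question (a hypothetical pair, not from this diff): Julia only skips specializing on a bare function argument when it is merely passed through to another call; calling it in the method body, as these definitions do, already triggers specialization, while the f::F ... where {F} spelling forces it unconditionally.

pass_through(f, x) = map(f, x)                      # f only forwarded; may stay unspecialized
pass_through_forced(f::F, x) where {F} = map(f, x)  # type parameter forces a specialization per f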

@vtjnash (Member Author) commented May 19, 2017

@nanosoldier runbenchmarks(ALL, vs = ":master")

@nanosoldier (Collaborator):

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @jrevels

@martinholters (Member):

The AppVeyor failure looks real, but it's a bit strange that inference should be platform- and OS-specific. Unless I missed something, only sparse/higherorderfns and arrayops ran on the same worker as the failing numbers test, and running only these three tests (on one worker) works fine for me locally, so it doesn't look like an inter-test dependency. Ideas?

@vtjnash (Member Author) commented May 22, 2017

I'm not sure of the specifics, but that method may be annotated incorrectly (will be fixed by https://github.com/JuliaLang/julia/pull/21771/files#diff-e7776a0970cdf71c5c96557db2986f86L315).

@vtjnash force-pushed the jn/infer-improve-cycle-detection branch from 3b08057 to d87fe81 on May 24, 2017 18:18
if isa(c, TypeVar)
if isa(t, TypeVar)
return type_more_complex(t.ub, c.ub, sources) ||
type_more_complex(t.lb, c.lb, sources);
Contributor:

unnecessary semicolons

for i = 1:length(tP)
type_more_complex(tP[i], cP[i], sources) && return true
end
else # allow deviation in the type name, if all of it's parameters came from the sources
Contributor:

its

end
if isa(t, TypeVar)
if isa(c, TypeVar)
if t.ub === c.ub && t.lb === c.ub
Member:

Is it intentional that this is comparing a lower bound to an upper bound, when everywhere else upper is compared to upper and lower to lower? If so, it could really use a comment explaining why.
If it's not intentional and this is a typo, then the fact that the tests pass is indicative of missing coverage.

allsame = false
break
# TODO: FIXME: this heuristic depends on non-local state making type-inference unpredictable
# it also should to be integrated into the cycle resolution and iterated to convergence
Contributor:

should be integrated

@@ -1566,6 +1778,9 @@ function abstract_apply(aft::ANY, fargs::Vector{Any}, aargtypes::Vector{Any}, vt
return res
end

# TODO: this function is a very buggy and poor model of the return_type function
# since abstract_call_gf_by_type is a very inaccurate model of _method and of typeinf_type,
# while this assumes that it is an exact of both
Contributor:

exact what of both?

@timholy (Member) commented Jun 9, 2017

Awesome. To help organize things I just tagged a couple of other old issues I'd filed (#17278, #16122, #21449) with an "inference" label.

@vtjnash (Member Author) commented Jun 9, 2017

@timholy Thanks. It fixed at least two of those. I think StaticArrays may have already addressed one of them, and I recall not being able to reproduce the ArrayIteration one.

@vtjnash (Member Author) commented Jun 9, 2017

I imagine that they should always be changed in lockstep, such that type_more_complex(t, ...) iff _limit_type_size(t, ...) != t?

Splitting this allows being more lenient about what is considered a more complex type, while being faster at converging when hitting the limit.

They often do get edited in lockstep, but I don't think they require much mutual agreement. They both must independently be monotonic over the type lattice, but I think it requires remarkably little coordination between them to achieve this (indeed, they don't actually coordinate very much right now). Since type_more_complex runs first, it just needs to be monotonic and self-consistent. Then _limit_type_size can make (or not make) a change that type_more_complex disagrees with, as long as _limit_type_size itself converges to a fixed point when run iteratively over the type it returns (i.e., is monotonic). This is perhaps also similar to how type_depth / limit_type_depth are paired.

For a simple concrete example, let's say we are operating only over the positive integers. Let's say more-complex considers complexity to be t <= c (e.g. monotonically decreasing), and let's say type-size considers complexity to be !(t > 10) (e.g. a finite set). Clearly these disagree about what types are permitted to ensure convergence. However, since the domain of the disagreement is finite, convergence of inference in this case is nevertheless assured. I think this shows that since both more-complex and type-size return false for a finite set of types (given c), the intersection of those sets must also be finite.

Thus while we could assert either of:

@assert !type_more_complex(t, r, source, TUPLE_COMPLEXITY_LIMIT_DEPTH)
@assert !type_more_complex(r, compare, source, TUPLE_COMPLEXITY_LIMIT_DEPTH)

I don't think either condition is actually necessary, but I'm not entirely convinced of this yet; let me know if you have better counter-examples.
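Transcribing that integer analogy literally (a toy sketch; the two predicates stand in for type_more_complex and _limit_type_size and claim nothing beyond the paragraph above):

more_complex(t, c) = t <= c     # "too complex" iff t <= c, per the analogy
within_size(t) = !(t > 10)      # the size check: true only for the finite set 1..10
# The two disagree about which values are allowed, but the domain of disagreement is
# finite, so (as argued above) convergence is still assured.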

detecting this code optimization (jump-threading) in the compiler is desirable, but generally very difficult
now that our inference convergence algorithm is reasonably reliable, this is a starting point for making the inference capabilities even better
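A generic sketch of the jump-threading named in the commit message above (the functions are illustrative, not from this PR): once the caller's argument type decides the branch condition, the compiler should thread the jump and keep only the taken path.

describe(x) = x isa Integer ? "integer" : "something else"
use_int(n::Int) = describe(n)   # after inlining, the isa test is known to be true,
                                # so this should ideally fold to returning "integer"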
@vtjnash force-pushed the jn/infer-improve-cycle-detection branch from a223bc7 to fcfe875 on June 12, 2017 18:45
@vtjnash (Member Author) commented Jun 12, 2017

CI succeeded; it just timed out while trying to handle the cache afterwards :/. Anyway, I think this is good to go.

@martinholters (Member):

There is also this in the Travis job that later timed out:

[Several interleaved copies of the following worker error, from multiple processes writing to the log at once:]

EOFError()
CapturedException(EOFError(), Any[
    (unsafe_read(::Base.AbstractIOBuffer{Array{UInt8,1}}, ::Ptr{UInt8}, ::UInt32) at iobuffer.jl:105, 1),
    (unsafe_read(::TCPSocket, ::Ptr{UInt8}, ::UInt32) at stream.jl:752, 1),
    (unsafe_read(::TCPSocket, ::Base.RefValue{NTuple{4,Int32}}, ::Int32) at io.jl:362, 1),
    (read at io.jl:364 [inlined], 1),
    (deserialize_hdr_raw at messages.jl:170 [inlined], 1),
    (message_handler_loop(::TCPSocket, ::TCPSocket, ::Bool) at process_messages.jl:157, 1),
    (process_tcp_streams(::TCPSocket, ::TCPSocket, ::Bool) at process_messages.jl:118, 1),
    ((::Base.Distributed.##99#100{TCPSocket,TCPSocket,Bool})() at event.jl:73, 1)])

Process(32) - Unknown remote, closing connection.
Process(34) - Unknown remote, closing connection.

Likely unrelated, but I've restarted the job; the old log is backed up here.

@martinholters (Member):

Hm, that error message is logged again, but it doesn't make the tests fail and it appears in other PRs too, so it shouldn't stop this. Still, if anyone can shed some light on what's going on, that would be appreciated.

Meanwhile, I'm merging this.

@martinholters merged commit d3db312 into master on Jun 13, 2017
@martinholters deleted the jn/infer-improve-cycle-detection branch on June 13, 2017 07:52
@amitmurthy (Contributor):

Seems like it is an artifact of the topology tests run as part of the distributed test. I'll take a look; it doesn't seem to be an issue on OS X, though.

@amitmurthy (Contributor):

I am not seeing it with the nightly build on Anubis. Don't have ready access to any other Linux machine....

cPi = unwrap_unionall(cPi)
if isa(tPi, DataType) && isa(cPi, DataType) &&
!tPi.abstract && !cPi.abstract &&
sym_isless(cPi.name.name, tPi.name.name)
Member:

This looks like it could lead to really unpredictable effects.

I also think type_more_complex and is_derived_type could use a whole bunch of unit tests, mostly as examples illustrating what they do.
