inference: improve cycle detection #21933
base/abstractarray.jl
Outdated
(next, _cshp(ndim + 1, (), tail(shape), tail(nshape))...)
end
@inline function _cshp(ndim::Int, dims, shape, nshape)
a = shape[1]
Should be 4 space indents
base/inference.jl
Outdated
source[1] === source[2] && (source = svec(source[1]))
type_more_complex(t, compare, source) || return t
r = _limit_type_size(t, compare, source)
#@assert !isa(t, Type) || t <: r
Is this something you'd expect would be reenabled at some point?
Perhaps. Currently it would just trigger the same false-bug detection as #21907.
base/tuple.jl
Outdated
ntuple(f, ::Type{Val{12}}) = (@_inline_meta; (f(1), f(2), f(3), f(4), f(5), f(6), f(7), f(8), f(9), f(10), f(11), f(12)))
ntuple(f, ::Type{Val{13}}) = (@_inline_meta; (f(1), f(2), f(3), f(4), f(5), f(6), f(7), f(8), f(9), f(10), f(11), f(12), f(13)))
ntuple(f, ::Type{Val{14}}) = (@_inline_meta; (f(1), f(2), f(3), f(4), f(5), f(6), f(7), f(8), f(9), f(10), f(11), f(12), f(13), f(14)))
ntuple(f, ::Type{Val{15}}) = (@_inline_meta; (f(1), f(2), f(3), f(4), f(5), f(6), f(7), f(8), f(9), f(10), f(11), f(12), f(13), f(14), f(15)))
Is this not needed anymore?
it was just for adding the inline meta (#21446), which the generated function can already do
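For reference, here is a minimal sketch of how a generated `ntuple` can splice the inline meta into the code it produces itself. The name `ntuple_sketch` and the simplified signature are hypothetical, not the actual Base definition:

```julia
# Hypothetical sketch: a @generated method can emit Expr(:meta, :inline)
# in the expression it returns, so hand-written Val{N} methods are not
# needed just to carry the inline annotation.
@generated function ntuple_sketch(f::F, ::Type{Val{N}}) where {F, N}
    calls = [:(f($i)) for i in 1:N]
    return quote
        $(Expr(:meta, :inline))
        ($(calls...),)
    end
end

# e.g. ntuple_sketch(i -> i^2, Val{3}) == (1, 4, 9)
```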
base/array.jl
Outdated
@_inline_meta
_size((out..., size(A,M+1)), A)
end
size(a::Array{<:Any,N}) where {N} = (@_inline_meta; ntuple(M -> size(a, M), Val{N}))
If a function like this one were not detected as convergent, would this likely wreak havoc among packages that use any kind of lispy recursion?
lispy recursion is fine. This is just more concise and clear. However, collector arguments aren't allowed. Hopefully I've shown enough examples in this PR of fixing method calls of that sort.
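To illustrate the distinction (invented toy functions, not code from this PR): plain structural recursion over a shrinking tuple stays easy for inference, whereas a growing "collector" argument makes every recursive signature strictly larger.

```julia
# Fine: the tuple argument strictly shrinks, so the recursion is finite
# in the signatures inference has to consider.
count_elems(::Tuple{}) = 0
count_elems(t::Tuple) = 1 + count_elems(Base.tail(t))

# Problematic pattern: the collector `out` grows on every call, so the
# recursive signatures keep getting more complex and inference must cut
# the recursion off by widening.
double_all(t::Tuple) = _double_all((), t)
_double_all(out, ::Tuple{}) = out
_double_all(out, t::Tuple) = _double_all((out..., 2 * t[1]), Base.tail(t))
```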
@nanosoldier
Ok, I admit I don't have the time/patience to work through the details here.
But if I understand correctly, this would not yet solve the evil case of #21244 (comment), or does it? Would imposing an additional limit on the recursion depth, independent of type complexity, be feasible and reasonable?
end
return convert(typeof(parent(Ai)), Ai)
return Ai
Why this change?
We eventually want inference to make this transform, but in general it's quite difficult (and we are currently doing a pretty poor job with the generated code for this).
base/inference.jl
Outdated
@@ -15,6 +15,7 @@ struct InferenceParams
inlining::Bool

# parameters limiting potentially-infinite types (configurable)
MAX_METHODS::Int |
👍
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @jrevels
Force-pushed from f5fc83c to 3b82cd6.
@martinholters yes, this fixes all of the issues brought up in #21244
There's an existing limit on the depth of argument types (which in turn bounds the recursion depth), but the computation of it was inaccurate, leading to potentially unbounded recursion. This fixes the inaccuracy (demonstrated by the failures in the issue you linked), and thus should ensure that inference will not run into infinite cases (it also generally tries to avoid very large ones).
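A hypothetical example of the kind of growth that limit guards against (not code from this PR): each recursive call nests the argument type one level deeper, so without an accurate depth/complexity limit inference could keep following ever-larger signatures.

```julia
# For a runtime-valued `n`, inference sees Int, Ref{Int}, Ref{Ref{Int}}, ...
# in successive recursive signatures; the limit forces it to widen instead
# of descending forever.
nest(x, n) = n <= 0 ? x : nest(Ref(x), n - 1)
```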
@nanosoldier
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @jrevels
base/abstractarray.jl
Outdated
_cshp((), (out..., 1), tail(shape), ())
_cshp(ndim::Int, ::Tuple{}, ::Tuple{}, ::Tuple{}) = ()
_cshp(ndim::Int, ::Tuple{}, ::Tuple{}, nshape) = nshape
_cshp(ndim::Int, dims, ::Tuple{}, ::Tuple{}) = (map(b -> 1, dims)...)
I believe that the splatting can be avoided here (since `dims` is a tuple), so just `map(b -> 1, dims)` should do.
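A quick check of the reviewer's point, using a throwaway `dims` value (purely illustrative):

```julia
dims = (3, 4, 5)
map(b -> 1, dims)        # (1, 1, 1) -- already a tuple
(map(b -> 1, dims)...,)  # same tuple; the splat adds nothing here
```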
base/tuple.jl
Outdated
# inferrable ntuple (enough for bootstrapping)
ntuple(f, ::Type{Val{0}}) = ()
ntuple(f::F, ::Type{Val{1}}) where {F} = (@_inline_meta; (f(1),))
ntuple(f::F, ::Type{Val{2}}) where {F} = (@_inline_meta; (f(1), f(2)))
Since these functions are called, why is the specialization-forcing needed?
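For background on what the question refers to: annotating a function argument as `f::F` with `F` as a method type parameter is the usual idiom for forcing a per-function specialization; Julia's heuristic otherwise may skip specializing on a `Function` argument that is only passed along rather than called, which is what makes the annotation look redundant when `f` is invoked directly. A sketch with hypothetical names:

```julia
# `f` is only passed through to `map`, so the specialization heuristic may
# reuse one generic compilation for all functions passed in.
apply_each(f, xs) = map(f, xs)

# The `f::F ... where {F}` annotation forces a fresh specialization for
# each distinct function.
apply_each_spec(f::F, xs) where {F} = map(f, xs)
```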
@nanosoldier
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @jrevels
The AppVeyor failure looks real, but it's a bit strange that inference should be platform- and OS-specific. But unless I missed something, only
I'm not sure of the specifics, but that method may be annotated incorrectly (will be fixed by https://github.com/JuliaLang/julia/pull/21771/files#diff-e7776a0970cdf71c5c96557db2986f86L315).
Force-pushed from 3b08057 to d87fe81.
base/inference.jl
Outdated
if isa(c, TypeVar)
if isa(t, TypeVar)
return type_more_complex(t.ub, c.ub, sources) ||
type_more_complex(t.lb, c.lb, sources);
unnecessary semicolons
base/inference.jl
Outdated
for i = 1:length(tP)
type_more_complex(tP[i], cP[i], sources) && return true
end
else # allow deviation in the type name, if all of it's parameters came from the sources
its
base/inference.jl
Outdated
end
if isa(t, TypeVar)
if isa(c, TypeVar)
if t.ub === c.ub && t.lb === c.ub
Is it intentional that this is comparing a lower bound to an upper bound, when everywhere else upper is compared to upper and lower is compared to lower? If so, could really use a comment explaining why.
If it's not intentional and this is a typo, then the fact that it passes tests is indicative of missing coverage.
base/inference.jl
Outdated
allsame = false
break
# TODO: FIXME: this heuristic depends on non-local state making type-inference unpredictable
# it also should to be integrated into the cycle resolution and iterated to convergence
should be integrated
base/inference.jl
Outdated
@@ -1566,6 +1778,9 @@ function abstract_apply(aft::ANY, fargs::Vector{Any}, aargtypes::Vector{Any}, vt
return res
end

# TODO: this function is a very buggy and poor model of the return_type function
# since abstract_call_gf_by_type is a very inaccurate model of _method and of typeinf_type,
# while this assumes that it is an exact of both
exact what of both?
Force-pushed from d87fe81 to fe180b1.
Force-pushed from fe180b1 to d58fe10.
@timholy thanks. It fixed at least two of those. I think StaticArrays may have already addressed one of them and I recall not being able to reproduce the ArrayIteration one.
Splitting this allows being more lenient about what is considered a more complex type, while converging faster when the limit is hit. They often do get edited in lockstep, but I don't think they require much mutual agreement. They both must independently be monotonic over the type lattice, but I think remarkably little coordination between them is required to achieve this (indeed, they actually don't coordinate very much right now). For a simple concrete example, let's say we are operating only over the positive integers. Thus while we could assert either of:
I don't think either condition is actually necessary. But I'm not entirely convinced of this yet; let me know if you have better counter-examples.
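One way to picture the contract being discussed, using the positive-integer analogy (a purely illustrative toy, not the actual inference functions and not the example originally given):

```julia
# Complexity is the integer itself; `c` is the budget taken from the
# comparison source.
more_complex(t::Int, c::Int) = t > c       # analogue of the predicate
limit_size(t::Int, c::Int)   = min(t, c)   # analogue of the widening step

# A property one might be tempted to require (the comment above argues
# even this much coordination may not be strictly necessary): whatever
# the limiting step returns is no longer flagged as too complex.
@assert !more_complex(limit_size(100, 7), 7)
```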
Force-pushed from b031f9e to a223bc7.
detecting this code optimization (jump-threading) in the compiler is desirable, but generally very difficult
now that our inference convergence algorithm is reasonably reliable, this is a starting point for making the inference capabilities even better
Force-pushed from a223bc7 to fcfe875.
CI succeeded; it just timed out while trying to handle the cache afterwards :/. Anyway, I think this is good to go.
There is also this in the Travis job that later timed out:
Likely unrelated, but I've restarted the job, old log back up here.
Hm, that error message is logged again, but it doesn't make the tests fail, and it appears in other PRs too, so it shouldn't stop this. But if anyone can shed some light on what's going on, that would be appreciated. Meanwhile, I'm merging this.
Seems like it is an artifact of the topology tests run as part of the distributed test. I'll take a look; it doesn't seem to be an issue on OSX though.
I am not seeing it with the nightly build on Anubis. Don't have ready access to any other Linux machine....
cPi = unwrap_unionall(cPi)
if isa(tPi, DataType) && isa(cPi, DataType) &&
!tPi.abstract && !cPi.abstract &&
sym_isless(cPi.name.name, tPi.name.name)
This looks like it could lead to really unpredictable effects. I also think `type_more_complex` and `is_derived_type` could use a whole bunch of unit tests, mostly as examples illustrating what they do.
This helps ensure that inference converges, by ensuring that any recursion must be finite. Also fixed a number of functions in base that weren't finite under recursion and thus could trigger infinite loops in inference (especially if we tried to infer based just on method signatures); a sketch of such a pattern follows the fix list below. I'm mildly optimistic that this will permit increasing `tupletype_len` :)

fixes #21244
fixes #17278
fixes #20714
fixes #19570
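As a hypothetical illustration of "not finite under recursion" (an invented example, not one of the functions actually fixed here): every self-call below has a strictly longer argument tuple, so inferring from signatures alone would walk through ever-larger signatures unless the limiting logic intervenes.

```julia
# Terminates fine at runtime, but each recursive signature gains one more
# Vararg element: (Int,), (Int, Int), (Int, Int, Int), ...
countdown(n, acc...) = n <= 0 ? acc : countdown(n - 1, acc..., n)

# countdown(3) == (3, 2, 1)
```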