
Proposal: Defer calculation of field types until type parameters are known #18466

Closed · andyferris opened this issue Sep 12, 2016 · 46 comments
Labels: speculative (Whether the change will be implemented is speculative), types and dispatch (Types, subtyping and method dispatch)

Comments

@andyferris (Member)

Currently, we can perform a limited amount of computation in a type definition, such as:

type A
    a::eltype(Vector{Int})
end

but we can't involve a type parameter, such as:

type B{V <: AbstractVector}
    a::eltype(V)
end

AFAICT, at the moment the field types A.types are calculated when the type is defined, and type parameters are inserted into the correct slots as they become known.

However, it would be nice if the types could be calculated by arbitrary inferrable or @pure functions. Another simple example (close to my heart) would be:

immutable StaticMatrix{M,N,T}
    data::NTuple{M*N, T}
end

However, this results in an error because multiplication is not defined for TypeVars. Instead, I need all of this code:

immutable StaticMatrix{M,N,T,L}
    data::NTuple{L, T}
    function StaticMatrix(d)
        check_params(Val{L}, Val{M}, Val{N})
        new(d)
    end
end

@generated function check_params{L,M,N}(::Type{Val{L}}, ::Type{Val{M}}, ::Type{Val{N}}) # could also be `@pure` in v0.5
    if L != M*N
        error("Type parameters don't match")
    end
end

and my users need to carry around the redundant L parameter whenever they need to specify a concrete type.

For abstract types, I'm hoping that inference itself could still be used to come up with a least-pessimistic approximation of each field, or otherwise just use Any when that's not possible. If that makes it difficult to avoid regressions, straight types (combinations of types and the relevant TypeVars with apply_type but no other functions) could keep functioning as they currently do.

@vtjnash added the "won't change" label (Indicates that work won't continue on an issue or pull request) on Sep 13, 2016
@vtjnash (Member) commented Sep 13, 2016

FWIW, there's no advantage to your complicated @generated function (and some disadvantages) over the naive version on any version of Julia (e.g. they end up running the same code):

immutable StaticMatrix{M,N,T,L}
    data::NTuple{L, T}
    function StaticMatrix(d)
        L == M*N || error("Type parameters don't match")
        new(d)
    end
end

also, duplicate of #8472

@vtjnash closed this as completed on Sep 13, 2016
@andyferris (Member Author) commented Sep 13, 2016

duplicate of #8472

Respectfully, I thought #8472 was much more general than this; it is about doing a fully-@generated type along the lines of GeneratedTypes.jl. I felt this issue isn't really a duplicate since it suggests a much narrower scope (allowing a slightly expanded set of pure functions for the field types) and should be less disruptive to implement (the number of fields and their names stay fixed). Speaking in person to @JeffBezanson (or maybe it was Stefan? sorry if I got that wrong!), he seemed reasonably positive about the idea.

FWIW, there's no advantage to your complicated @generated function (and some disadvantages) over the naive version on any version of Julia (e.g. they end up running the same code):

That's not at all what I see. Your code executes a run-time check and my code executes a compile-time check.

Run-time:

julia> immutable StaticMatrix{M,N,T,L}
           data::NTuple{L, T}
           function StaticMatrix(d)
               L == M*N || error("Type parameters don't match")
               new(d)
           end
       end

julia> StaticMatrix{2,2,Int,4}((1,2,3,4))
StaticMatrix{2,2,Int64,4}((1,2,3,4))

julia> @code_native StaticMatrix{2,2,Int,4}((1,2,3,4))
    .text
Filename: REPL[1]
    pushq   %rbp
    movq    %rsp, %rbp
    movq    %fs:0, %rax
    addq    $-2672, %rax            # imm = 0xFFFFFFFFFFFFF590
    vxorps  %xmm0, %xmm0, %xmm0
    vmovups %xmm0, -16(%rbp)
    movq    $4, -32(%rbp)
    movq    (%rax), %rcx
    movq    %rcx, -24(%rbp)
    leaq    -32(%rbp), %rcx
    movq    %rcx, (%rax)
Source line: 5
    vmovups (%rdx), %ymm0
    vmovups %ymm0, (%rdi)
    movq    -24(%rbp), %rcx
    movq    %rcx, (%rax)
    movq    %rdi, %rax
    popq    %rbp
    vzeroupper
    retq
    nopl    (%rax)

Compile-time:

julia> @generated function check_params{L,M,N}(::Type{Val{L}}, ::Type{Val{M}}, ::Type{Val{N}}) # could also be `@pure` in v0.5
           if L != M*N
               error("Type parameters don't match")
           end
       end
check_params (generic function with 1 method)

julia> immutable StaticMatrix2{M,N,T,L}
           data::NTuple{L, T}
           function StaticMatrix2(d)
               check_params(Val{L}, Val{M}, Val{N})
               new(d)
           end
       end

julia> @code_native StaticMatrix2{2,2,Int,4}((1,2,3,4))
    .text
Filename: REPL[12]
    pushq   %rbp
    movq    %rsp, %rbp
Source line: 5
    vmovups (%rdx), %ymm0
    vmovups %ymm0, (%rdi)
    movq    %rdi, %rax
    popq    %rbp
    vzeroupper
    retq
    nopw    %cs:(%rax,%rax)

@vtjnash am I misinterpreting this? Perhaps the branch is eliminated in the first case (for instance, I don't see a call to the error), but the calculation of L == M*N remains? Either way, the code is different...

@yuyichao (Contributor)

This is basically a dup of #15791.

Please never use @code_native to tell the difference between two functions. It's almost useless unless you hit a codegen bug or you are a CPU. The difference in the generated code is due to #17880 and #15369.

@vtjnash (Member) commented Sep 13, 2016

The only operation permissible while constructing a type is allocation (http://docs.julialang.org/en/latest/devdocs/locks/). No other inspection of the system is possible in a thread-safe manner. Additionally, you would need to provide an algorithm for allocating them uniquely during deserialization (and handling any errors), despite the fact that the system is temporarily in an inconsistent state and examining it too closely will lead to errors (e.g. you can't look at the fields of a type, since they might still contain placeholders instead).

@andyferris (Member Author)

This is basically a dup of #15791.

That seems closer to the mark. Thanks @yuyichao.

Please never use @code_native to tell the difference between two functions.

Huh? This totally confuses me. I only use @code_native to see which of many possible implementations runs fastest.

I do understand that @code_native will be different on different CPUs and different versions of Julia. But if I want to ask whether Jameson's code or my code will run faster on my computer right now, surely @code_native is the tool I want?

The reason I am using this "trick" is exactly that #17880 and #15369 will only be fixed in the future, while I am using Julia for real-world work right now. Stating that two implementations "should" behave the same isn't really of practical value.

PS - @yuyichao I'm curious - could you explain how GC frame generation in #15369 affects my isbits types here?

@andyferris (Member Author)

@vtjnash Thank you very much for the insight. I totally admit that implementing this could be messy and complicated and might require changes to how the system works, and, unless I can help somehow, it remains up to you guys to figure out what is feasible or worthwhile given the payoff. I had hoped this would be simpler than #8472 (fully-generated types) since, to me as an end-user, it seems pretty non-disruptive (affecting only how DataType.types gets populated).

Thanks again. Cheers :)

@andyferris (Member Author)

This issue is very much related to #8322, which is still open and contains the ideas I requested here.

@yuyichao (Contributor)

I do understand that @code_native will be different on different CPUs and different versions of Julia. But if I want to ask whether Jameson's code or my code will run faster on my computer right now, surely @code_native is the tool I want?

No. The point is not how reproducible it is across systems but how easy it is to understand the difference. For 99% of cases (the 1% being codegen/LLVM bugs) code_llvm contains much more information compared to code_native, and for 99.9% of people (i.e. unless you are a CPU, or as good as one at understanding assembly) the code_llvm is much easier to understand and learn from. FWIW, I hand-wrote two of the instructions that appear in your longer version and I wasn't able to tell at first glance why it is longer than the second one; OTOH, it was extremely clear from code_llvm, and code_warntype then shows why the extra code is there when it isn't needed.

The reason I am using this "trick" is exactly that #17880 and #15369 will only be fixed in the future, while I am using Julia for real-world work right now. Stating that two implementations "should" behave the same isn't really of practical value.

The better workaround is to move the condition into a @pure function.
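
A minimal sketch of that workaround, in the 0.5-era syntax used elsewhere in this thread (check_size and StaticMatrix3 are illustrative names, not from the thread):

# The type parameters are compile-time constants inside the constructor, so a
# @pure check on them can be folded away rather than run on every construction.
Base.@pure check_size(L::Int, M::Int, N::Int) = (L == M*N || error("Type parameters don't match"); nothing)

immutable StaticMatrix3{M,N,T,L}
    data::NTuple{L,T}
    function StaticMatrix3(d)
        check_size(L, M, N)   # folds to nothing when the parameters are consistent
        new(d)
    end
end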

PS - @yuyichao I'm curious - could you explain how GC frame generation in #15369 affects my isbits types here?

#11508

@andyferris (Member Author)

The point is not how reproducible it is across systems but how easy it is to understand the difference.

Yes, LLVM IR is usually easier to read, though I have noticed some operations in @code_llvm are more-or-less elided in @code_native, so I guessed LLVM does another optimization pass during the translation (actually, it definitely does: many basic functions are calls to an LLVM function in @code_llvm, while in @code_native I can see which are inlined and which actually allocate a stack frame, etc.).

I think it's just that most of the code in StaticArrays is extremely simple (e.g. adding two pairs of floats, or the (shorter) constructor above), so I found both formats readable and used a combination of both. But thank you for the advice; I'll err toward LLVM IR from now on.

The better workaround is to move the condition into a @pure function.

Agreed (there was a comment in my code block about that, but you had to scroll sideways... the given version is 0.4-friendly).

#11508

Right, awesome, thank you! I'm glad for that optimization... previously I was really pessimistic about throwing errors in StaticArrays since the code is so performance-sensitive, but I see all this will work out beautifully in the end.

@yuyichao (Contributor)

I guessed LLVM does another optimization pass during the translation

99% of the time not in any way you would care about.

(actually, it definitely does: many basic functions are calls to an LLVM function in @code_llvm, while in @code_native I can see which are inlined and which actually allocate a stack frame, etc.).

Those are LLVM intrinsics (i.e. instructions) and are never function calls to begin with.

@yuyichao (Contributor)

99% of the time not in any way you would care about.

Exception being #16375

@andyferris (Member Author)

OK, thanks for all the useful info @yuyichao! I'll read up on the intrinsics, and then I should be good to use @code_llvm more frequently.

(On a side note, @code_warntype has become much less readable in v0.5... my functions are polluted with :metas (with more than 50% of the code being metas for inbounds), and I'm still confused about invoke...)

@JeffBezanson (Member)

#8322 asked for two features, one the same as this issue and one for computed subtyping. I believe those have very different implications, so we should have separate issues for them.

I do think it is possible to support this by storing functions to compute the field types (based on a syntactic heuristic, e.g. a type parameter occurs somewhere other than inside a type application A{B}). Until all type parameters are known, we can just assume those fields are Any. Those will be abstract types anyway.
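
A rough sketch of such a syntactic check over field-type expressions (an editorial illustration only; the helper names are made up and this is not code from the issue):

# A field type expression is "plain" if it is built only from names, literals,
# and type applications A{B,...}; anything else (a call such as eltype(V) or
# M*N) would need a stored thunk, with `Any` reported until the parameters are known.
is_plain_type_expr(ex::Symbol) = true
is_plain_type_expr(ex::Expr)   = ex.head === :curly && all(is_plain_type_expr, ex.args)
is_plain_type_expr(ex)         = isa(ex, Number)   # literal parameters such as the 3 in NTuple{3,T}

needs_field_thunk(fieldtype_ex) = !is_plain_type_expr(fieldtype_ex)

# needs_field_thunk(:(Vector{T}))      == false  (plain type application, stored as today)
# needs_field_thunk(:(eltype(V)))      == true   (parameter used in a call)
# needs_field_thunk(:(NTuple{M*N,T}))  == true   (computed parameter inside the application)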

@JeffBezanson reopened this on Sep 13, 2016
@JeffBezanson removed the "won't change" label on Sep 13, 2016
@kshyatt added the "types and dispatch" label on Sep 13, 2016
@StefanKarpinski added this to the 1.0 milestone on Sep 13, 2016
@StefanKarpinski added the "speculative" label on Sep 13, 2016
@andyferris (Member Author)

Thanks guys.

...I believe those have very different implications, so we should have separate issues for them.

I do think it is possible to support this by storing functions to compute the field types (based on a syntactic heuristic, e.g. a type parameter occurs somewhere other than inside a type application A{B}). Until all type parameters are known, we can just assume those fields are Any. Those will be abstract types anyway.

Agreed.

@vtjnash (Member) commented Sep 15, 2016

Until all type parameters are known, we can just assume those fields are Any

Fair enough. If we prohibit types with generated fields from being isbits (e.g. guarantee they will be heap allocated), that should likely help with making the layout and precompile serialization computable (and not cause #18343, #16767, & friends to be reopened).

@andyferris (Member Author)

Do you mean just the abstract types? For the concrete types, isbits would be crucial in some circumstances, especially for the StaticMatrix example above.
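
A minimal sketch of why this matters, assuming StaticArrays is installed and a recent Julia (where isbitstype replaces the old isbits(::Type)):

using StaticArrays

isbitstype(SMatrix{3,3,Float64,9})  # true: fully parameterized and concrete, so a Vector stores the elements inline
isbitstype(SMatrix{3,3,Float64})    # false: the length parameter is missing, so the type is not concrete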

@tehrengruber

I needed this desperately, so I've written an intermediate solution that uses macros and allows correct type inference. It's still a prototype and a bit hacky, but it works :-). I guess one would implement this differently if one wanted to add language support, but I hope it's useful to everyone who is waiting for this feature.
The code can be found at https://github.com/tehrengruber/ExtendedParametricTypes

@andyferris (Member Author)

Nice.

@vtjnash (Member) commented Jan 22, 2017

there seems to be some weird, non-hygienic eval / Base.return_types in that code. It seems like the correct approach would be simply to call eval(current_module(), expr.args[1]) there?

@tehrengruber commented Jan 22, 2017

I didn't want to eval expr.args[1] directly since it could be a more complex expression than just the unparameterized type name. When the evaluation of expr.args[1] depends on the context, there is no way to deduce the correct unparameterized type name (which is needed for a type-stable expansion) at macro expansion time. To check whether expr.args[1] depends on the context, I just use type inference: I compile a closure :(() -> $(expr.args[1])), and in case the return type is inferred to Type{A} with A being concrete (i.e. not abstract), I know the unparameterized type name at macro expansion time and can generate a type-stable expansion. Otherwise I fall back to a non-type-stable version (which does not need to run eval).

Here is an example to illustrate where eval(current_module(), expr.args[1]) would fail.

function some_function(t::DataType)
    @EPT(t{Int})
end

In general, using eval here is certainly not bulletproof. If one has a const t = Int defined in the module, the EPT macro expansion would be wrong, I guess, but for now that's tolerable.

Another nice side effect of using Base.return_types is that I do not directly evaluate code that was passed to the macro, which is a bad-practice boundary I didn't want to cross with my module (it's ugly enough).

PS: Further discussion should probably happen in the package's issue tracker.
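
A rough sketch of the inference check described above (editorial illustration with a made-up helper name, not the package's actual code; assumes a recent Julia):

# Build a zero-argument closure returning the head expression and ask inference
# whether it narrows to Type{A} for a specific A.
function head_known_to_inference(m::Module, head_expr)
    thunk = Core.eval(m, :(() -> $head_expr))
    rt = only(Base.return_types(thunk, Tuple{}))
    return rt isa DataType && rt <: Type
end

# head_known_to_inference(Main, :Int)      # true: inference sees Type{Int}
# head_known_to_inference(Main, :(f(x)))   # false in general: the result depends on runtime context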

@andyferris (Member Author)

you don't need to ever call fulltype with my package now. however, if you don't have / call fulltype, the compiler will heap-allocate the value.

With the expectation that the hidden ones would usually not participate in dispatch, show, etc. (although it should still be possible)

this is literally exactly what my package implements.

Right, yes, I completely understand both; your package seems very useful, complete, and simple. I really like it! Nonetheless, I would be using fulltype for performance optimizations. For example, at work we store data in a Vector{SMatrix{3,3,Float64,9}}, take the elements out of the vector and onto the stack, do a specialized 3x3 eigendecomposition on each, put the results in another vector, etc. If I used a non-concrete element type like Vector{SMatrix{3,3,Float64}}, performance would be toast!

My speculation was about how to remove fulltype in the (smallish) set of cases where you want to explicitly name a concrete type to improve performance, e.g. to construct a container. There have been multiple occasions where I have added that 9 to co-workers' code so they can get the performance they need. They do end up learning something important about Julia and my implementation of StaticArrays, but it would be simpler if it "just worked" the way they typed it.

@vtjnash (Member) commented Jan 24, 2017

The trouble with Vector{SMatrix{3,3,Float64}} is that regardless of where you hide the computed parameter, it's still effectively present somewhere. If you implement this in the language, then you wouldn't have the fulltype function available to force it into being an isbits type, so it would instead likely need to always be a heap-allocated pointer array. I assume this isn't what you would want for performance! The package version, in contrast, lets you pick between (a) the heap-allocated non-parameterized version and (b) the "fulltype" inferred version.

@andyferris (Member Author) commented Jan 24, 2017

If you implement this in the language, then you wouldn't have the fulltype function available to force it into being an isbits type, so it would instead likely need to always be a heap-allocated pointer array.

I think you might have misunderstood my suggestion. I was saying that if you apply a type variable to a type, and that type then has all of its non-hidden parameters filled, then the hidden parameters would always be calculated automatically.

So going back to the proposed semicolon notation: if I had SM = SMatrix{3,3,T; L} where T where L and then wrote SM{Float64}, the output would automatically become SM{Float64} == SMatrix{3,3,Float64; 9}. So we would have SMatrix{3,3,Float64} === SMatrix{3,3,Float64; 9}. (However, the L and the 9 would be hidden from the user.) In this world, the type SMatrix{3,3,Float64; L} would never be constructed or added to the type cache. The C code would have to be changed to do this. (Note: I'm not saying this is desirable, or even that this is a good solution to the problem... perhaps dealing with such field computations more directly, without intermediate type variables, would be more elegant.)

@vtjnash (Member) commented Jan 24, 2017

I'm not misunderstanding; I'm just pointing out that wrapping this in different syntax has no actual impact on what is possible for the underlying implementation. In order to fill in the extra parameters, you need an eval stage. In my example package, that eval stage is called fulltype. There could be an eval stage in Base or not, but it can't simply be ignored under the label "auto-magic".

@martinholters (Member)

I've been pondering using StaticArrays in ACME for quite a while, and I think it would be a perfect fit if it weren't for the proliferation of type parameters. It's based on state-space models, which (in the linear case) operate as

x(n) = Ax(n-1) + Bu(n)
y(n) = Cx(n-1) + Du(n)

That would translate to (fixing the eltype for simplicity):

struct StateSpaceSystem{Nx,Nu,Ny,NxNx,NxNu,NyNx,NyNu}
    A::SMatrix{Nx,Nx,Float64,NxNx}
    B::SMatrix{Nx,Nu,Float64,NxNu}
    C::SMatrix{Ny,Nx,Float64,NyNx}
    D::SMatrix{Ny,Nu,Float64,NyNu}
end

Bad enough. But actually, I'm simulating non-linear systems, which I incorporate by using even more matrices (with interdependent sizes, of course). Also, I'm very concerned about speed (why else spend the effort of changing to StaticArrays?), so I really want all types to be concrete to avoid dynamic dispatch. Is there some best practice for coping with this situation?

@vtjnash (Member) commented Jan 31, 2018

Is there some best practice for coping with this situation?

struct StateSpaceSystem{Nx, Nu, Ny,
        AT <: SMatrix{Nx, Nx},
        BT <: SMatrix{Nx, Nu},
        CT <: SMatrix{Ny, Nx},
        DT <: SMatrix{Ny, Nu}}
    A::AT
    B::BT
    C::CT
    D::DT
end

Or you could use https://github.com/vtjnash/ComputedFieldTypes.jl to achieve the exact same result as proposed in this issue, although I think the proposal in this issue may be more cumbersome than using the above (it does, however, provide an implementation of fulltype for you, for taking a StateSpaceSystem{Nx, Nu, Ny} and filling in the rest of the parameters).
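
A minimal sketch of that approach, assuming the package's documented @computed / fulltype API and that StaticArrays is installed (the struct name is illustrative, not from the thread):

using ComputedFieldTypes, StaticArrays

@computed struct LinearSS{Nx,Nu}
    A::SMatrix{Nx,Nx,Float64,Nx*Nx}
    B::SMatrix{Nx,Nu,Float64,Nx*Nu}
end

LinearSS{2,1}            # abstract: the computed length parameters are still free
fulltype(LinearSS{2,1})  # fills them in, yielding a concrete, isbits-capable type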

My speculation was about how to remove fulltype

If you don't like using fulltype, just ignore it. Just because my package provides this extra functionality over the proposal in this issue doesn't mean you have to use it.

mbauman added a commit that referenced this issue Apr 19, 2018:
  reserve syntax that could be used for computed field types (#18466) (#26816)
mbauman added a commit that referenced this issue Apr 23, 2018:
  Revert "reserve syntax that could be used for computed field types (#18466) (#26816)" (#26857)
thchr added a commit to thchr/Spglib.jl that referenced this issue Dec 3, 2021
`MMatrix{3,3,L}` is not a concrete type, unfortunately. I figured I'd suggest changing this so inference of `getfield(::Cell, :lattice)` isn't poisoned.

The same problem affects the `positions` field, but it cannot be fixed as simply while retaining the `MMatrix` approach (cf. JuliaLang/julia#18466): personally, I think it'd make more sense to treat that as a `Vector{MVector{3,P}}`; it doesn't seem worthwhile to specialize on how many atoms are in a cell. Anyway, that is a larger change, and I just wanted to make the minimal change here.
@vtjnash (Member) commented Mar 30, 2023

I know there is some debate about this, but marking as a duplicate of #8472 to consolidate discussion there.

@vtjnash closed this as completed on Mar 30, 2023