Move to Core.Compiler? #40

Open
timholy opened this issue Jun 13, 2024 · 6 comments

Comments

@timholy

timholy commented Jun 13, 2024

To me it feels like @stable would make more sense in Base. The implementation seems fairly straightforward:

  • @stable annotates a function body via the meta mechanism (essentially trivial; a rough sketch is below)
  • In inference, we add a check for a :stable meta annotation; if it is found, then once the function is done being inferred we impose a rule: if the input types are concrete and the inferred output types are not, throw an error (during compilation)
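
For the first bullet, a rough sketch (just the annotation half; the inference-side check for the flag is the part that would actually need to live in the compiler):

# Rough sketch only: push an `Expr(:meta, :stable)` node into the function body,
# the same mechanism used by flags such as `Expr(:meta, :inline)`. Inference does
# not currently look for a :stable flag, so the check itself is not shown here.
macro stable(fdef)
    fdef.head in (:function, :(=)) || error("@stable expects a function definition")
    body = fdef.args[2]
    fdef.args[2] = Expr(:block, Expr(:meta, :stable), body)
    return esc(fdef)
end

@stable f(x) = x + 1   # the lowered code for `f` now carries the :stable meta flag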

However, it might be even better as a new AbstractInterpreter type. I've never played with that part of inference, so it might take some digging/asking to figure out what that implementation would look like.

@MilesCranmer
Owner

I am 100% for this!

On LinkedIn @ChrisRackauckas mentioned he was going to talk to the compiler team about potentially doing this, though I am not sure what the status is.

It would also be nice to have the same features in Base with respect to configuration via Preferences.jl, like this: https://github.com/MilesCranmer/DispatchDoctor.jl?tab=readme-ov-file#usage-in-packages. I have found it useful to turn on instability checks for multiple packages during testing, while still having the default be disabled (in case downstream users are doing unstable things on purpose).
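
Concretely, that currently looks something like this in a downstream package's test setup (MyPackage being a stand-in name, and "instability_check" being the preference key from the README):

# test/runtests.jl of a hypothetical MyPackage whose code is wrapped in @stable
using Preferences
using MyPackage

# Turn the checks into hard errors for this environment only; the default elsewhere
# stays whatever the package author chose. The preference is read when MyPackage is
# (pre)compiled, so changing it triggers a recompile on the next load.
set_preferences!(MyPackage, "instability_check" => "error"; force=true)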

@ChrisRackauckas

I had mentioned it to @topolarity and @gbaraldi, though I know they are a bit busy trying to get everything juliac-ready for JuliaCon (https://pretalx.com/juliacon2024/talk/FUTLND/) as a "big reveal" of small binaries. But I do hope that we can find some of their time to do this properly. Again, as I mentioned elsewhere, I think the interface here is solid, but I don't think the implementation is quite correct. It needs to run after Julia's compiler passes, because the heuristics used here are not always true. For example, the union-splitting handling from #31 is okay at a high level, but users can actually change that heuristic on a per-module basis, and there are other inference heuristics and effects analyses that could make unions split or not split. There are other issues as well: for example, you might have a piece of code that doesn't infer, but dead-code elimination (DCE) removes it entirely, so it turns out to be a non-issue.

So at the end of the day, the only way to really know whether you have "bad" behavior is to ask Julia after its compiler optimizations. Checking pre-optimization will produce false positives, and heuristics to remove those would introduce false negatives.

I had suggested doing it at the LLVM level in #30 because it could then just reuse the same infrastructure as https://github.com/JuliaLang/AllocCheck.jl. At the LLVM level, you just need to flag functions that call jl_apply_generic (correct me if I'm wrong, there might be another function or two, but it's at least a small finite list), and that would give you every spot where dynamic dispatch occurs in the final compiled code. That's actually just a subset of what AllocCheck does, since any dynamic dispatch introduces allocations, so it already effectively finds all of those spots.
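
You can see the idea with a deliberately crude sketch (not AllocCheck's API; a real tool inspects the IR properly rather than string-matching):

# Sketch only: grep the optimized LLVM IR of a specific method instance for calls
# into the runtime's generic-dispatch entry point (jl_apply_generic / ijl_apply_generic).
using InteractiveUtils: code_llvm

function has_dynamic_dispatch(f, argtypes::Tuple)
    io = IOBuffer()
    code_llvm(io, f, argtypes; optimize=true, debuginfo=:none)
    return occursin("jl_apply_generic", String(take!(io)))
end

struct Box
    x                       # untyped field, so `b.x + 1` cannot be devirtualized
end
addone(b::Box) = b.x + 1

has_dynamic_dispatch(addone, (Box,))          # likely true: dynamic call to `+`
has_dynamic_dispatch(x -> x + 1, (Float64,))  # false: fully static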

So my first suggestion is to just "extend" AllocCheck with a second macro that's just for dynamic-dispatch checking (which IIUC is just subsetting the whitelist that already exists), and then use the ideas of DispatchDoctor.jl to build a better API. I think we should apply the DispatchDoctor.jl API ideas to the allocation-free checks as well; being able to annotate a function and have the check run only in tests and such is a very nice idea. That implementation shouldn't take all that long, and it would get us a correct and usable tool that handles both dynamic-dispatch and allocation testing with a really nice high-level API. So I'd "rush" to get that together as a generally usable tool.

From there, I think we should consider what the next steps would be. Allocation testing cannot be moved to the Julia level, IIUC, because it is only correct after LLVM optimizations, while dynamic dispatch is a Julia-level property that could perhaps be checked in an AbstractInterpreter after inference and Julia-level optimizations like DCE? That's a question mark from me, and I'll let the compiler devs answer it 😅. But I think the relative improvement of that next step (an AbstractInterpreter implementation) over the LLVM-level one is low: the LLVM level already solves the correctness problem, though the AbstractInterpreter level would retain more high-level information and so could produce better error messages. So I personally wouldn't put that part at the highest priority.

@MilesCranmer
Owner

MilesCranmer commented Jun 15, 2024

there are other inference heuristics and effects analyses that could make unions split or not split

Indeed, this seems slightly tricky to do without going into the internals, although perhaps the outermost Base.promote_op would still catch a downstream type instability caused by union splitting.

DCE

How different is Base.promote_op from what you have in mind? Base.promote_op, which is what is used currently, looks to be an alias of Core.Compiler.return_type, and it is also used for things like figuring out the eltype of a comprehension. So, is the main idea here to flag dynamic dispatch internal to a function?
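
For reference, the bare-bones version of that return-value check looks something like this (just a sketch, not the actual implementation, and promote_op / return_type are compiler internals whose results can change between Julia versions):

# Minimal sketch: infer the return type for concrete argument types and complain
# when it is not concrete. This is the "return values only" flavour of the check.
function assert_stable_return(f, argtypes::Type...)
    T = Base.promote_op(f, argtypes...)
    isconcretetype(T) || error("unstable return: $f$argtypes inferred as $T")
    return T
end

relu(x) = x > 0 ? x : 0               # returns Float64 or Int depending on the branch
assert_stable_return(relu, Float64)   # throws: inferred Union{Float64, Int64}
assert_stable_return(relu, Int)       # fine: Int64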

and that would give you every spot where dynamic dispatch occurred in the final compiled code

Just to note that this would be slightly different behaviour from the existing @stable, which only checks for type-stable return values rather than for any instances of dynamic dispatch in the body. The idea is that if you have some logging callback that does dynamic dispatch, it would not get flagged (for better or worse). In any case, I think that erroring on internal dispatch should be an optional flag, as in certain contexts it is acceptable and it is just the return values that kill performance.

Right now, this is done with @unstable, which lets you declare certain functions as allowed to be unstable. So, for example:

@stable begin
    f(x) = x
    g(x) = x
    @unstable logging_callback(x) = ...
    function h(x)
        logging_callback(x)
        f(x) + g(x)
    end
end

h(1)

In this example, f, g, and h are all checked for unstable return values.

logging_callback would normally be checked as well, but here we have declared it @unstable, which turns the check back off.

@JeffBezanson

I am just ducking in here quickly since any mention of moving things to Core or Base is the bat signal for me :) I think this is a cool package for users who want it, especially at the application level, but there are conceptual problems with @stable. Functions cannot really be divided into two kinds, must-be-stable and can-be-unstable, such that it would make sense to mark each function with its "kind". For example getindex(::Vector, ::Int) is only stable for vectors with concrete element types. Also these days many functions achieve type stability through constant propagation.
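
Concretely:

# The very same method of getindex infers to a concrete type for one container and
# to Any for another, so "stable" is not a property of the method alone.
v_int = [1, 2, 3]              # Vector{Int}
v_any = Any[1, "two", 3.0]     # Vector{Any}

Base.promote_op(getindex, typeof(v_int), Int)   # Int64 (concrete)
Base.promote_op(getindex, typeof(v_any), Int)   # Any   (abstract)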

Just to note that this would be slightly different behaviour from the existing @stable, which only checks for type stable return values, rather than any instances of dynamic dispatch in the body.

I agree with checking return types being more useful; disallowing dynamic dispatch turns out to be very restrictive and often unrelated to performance, e.g. for logging and error paths.

@ChrisRackauckas

I agree with checking return types being more useful; disallowing dynamic dispatch turns out to be very restrictive and often unrelated to performance, e.g. for logging and error paths.

It depends on the use case: one of our big use cases for these kinds of tools is codegen for static libraries, where we want to remove the runtime, or at least hope juliac will remove the runtime. That has stricter requirements than just using things from Julia, and I think @gbaraldi mentioned the other day that he ran into this: it's hard to ensure that library code satisfies the requirements for such an application.

@MilesCranmer
Owner

MilesCranmer commented Jun 18, 2024

I can definitely see use cases for both of these modes. What I think would be nice is to just have another option for this, perhaps "extent". Along the lines of the existing options, you could have:

  • default_extent::String="return_only":
    • Change the default extent from "return_only" to "full" to also detect dynamic dispatch within the function body, analysing all compiled LLVM code after optimisations
    • To locally or globally override the mode for a package that uses DispatchDoctor, you can use the "instability_check_extent" key in your LocalPreferences.toml (typically configured with Preferences.jl).

Then you could configure this as another parameter with Preferences.jl, and even set it at a package level.

@stable default_extent="full" begin
    f(x) = x
    g(x) = x^2
end

@ChrisRackauckas if AllocCheck.jl adds a detector for dynamic dispatch, perhaps DispatchDoctor.jl could add it as a dependency and call it whenever the user passes extent="full"? And then DispatchDoctor.jl could still perform the return-value analysis as it does currently if extent="return_only".

P.S.,

For example getindex(::Vector, ::Int) is only stable for vectors with concrete element types. Also these days many functions achieve type stability through constant propagation.

Just for anybody scrolling through this thread, I will just note that @stable applies to a method rather than a function. So the way to solve cases such as the one @JeffBezanson mentioned is to use:

@stable getindex(::MyVector, ::Int) = ...
@unstable getindex(::MyVector{MyUnstableType}, ::Int) = ...

The same advice applies for constant propagation. If DispatchDoctor starts complaining about an instability in some function you know will be inlined, just stick an @unstable on it, and you are good! The @stable on the outermost function will still catch any inference problems if that function does not end up being inlined.

It might get tedious if you have many cases of this, but I think it's still useful to declare explicitly what you expect to be unstable and inlined.
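
For example, something like this (hypothetical functions; the helper is only stable once the constant index propagates into it):

using DispatchDoctor: @stable, @unstable

# The helper's return type depends on the runtime value of `i`, so for heterogeneous
# tuples it is unstable on its own; we mark it @unstable explicitly.
@unstable element(t::Tuple, i::Int) = t[i]

# The caller passes a literal index, which constant-propagates, so the combination
# infers fine; @stable on the caller still catches it if that ever stops being true.
@stable firstelement(t::Tuple) = element(t, 1)

firstelement((1, "two"))   # returns 1; the check should pass since the index constant-propagates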

Also I think it's useful to only enable stability analysis during testing (with Preferences.jl), just in case downstream users are doing unstable stuff on purpose.
