
Use tagged CodeInstances for AbstractInterpreter caching #52233

Merged: 3 commits, Feb 10, 2024
Conversation

@vchuravy (Member) commented Nov 19, 2023

Currently, external AbstractInterpreters all follow the pattern CACHE = IdDict{MethodInstance,CodeInstance}. The NativeInterpreter has the benefit that each MethodInstance carries an inline cache of CodeInstances.

CodeInstances pull triple duty: they contain validity information (min_world, max_world), the inference state cache, and the compilation state cache.

Currently, when we create a CodeInstance we don't record its owner, and we thus construct detached CodeInstances.

In this PR I introduce the notion of tagging code instances with their owner (nothing meaning native), thus allowing foreign code instances to be stored in the inline cache.

GPUCompiler/CUDA would change its caching strategy from IdDict{MethodInstance, CodeInstance} to IdDict{CodeInstance, LLVM.Module}
or similar. Ideally we would allow polymorphism for the compilation state part of the code instance, but that seems too invasive for now.

This has the benefit that CodeInstances are handled by caching and it would allow us to cache inference results
from foreign abstract interpreters across sessions.

The downside is that the cache walk is now a bit slower, since we need to filter on owner. If this turns out to be
a large bottleneck we could bifurcate the cache on owner.
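To make the mechanism concrete, here is a toy model of the owner-tagged inline cache and the filtered walk described above (illustrative only; the real CodeInstance is a C-backed Core type and the walk happens in C):

```julia
# Toy stand-in for Core.CodeInstance; `owner === nothing` marks entries
# created by the native interpreter.
mutable struct ToyCodeInstance
    owner::Any
    min_world::UInt
    max_world::UInt
    next::Union{ToyCodeInstance,Nothing}
end

# The cache walk this PR describes: follow the `next` chain hanging off the
# MethodInstance and keep only entries whose owner matches and whose world
# range covers `world`.
function lookup(head::Union{ToyCodeInstance,Nothing}, owner, world::UInt)
    ci = head
    while ci !== nothing
        ci.owner === owner && ci.min_world <= world <= ci.max_world && return ci
        ci = ci.next
    end
    return nothing  # miss: the caller runs inference and prepends a new entry
end
```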

TODO:

Unresolved

  • What to do about ephemeral users like REPLInterpreter: should those go into mi.cache?
    • The same question applies to Cthulhu. I would prefer not having to support the old caching style (since it won't get backedge updates).
  • Uses of jl_rettype_inferred_native: which ones should receive an owner token?

@vchuravy added the speculative, compiler:inference, and caching labels Nov 19, 2023
@Keno (Member) commented Nov 20, 2023

I think this is fine, but I do wonder whether this is slightly premature. Just to clarify: the primary motivation here is not the performance of the IdDict implementation, but cache maintenance on world-age invalidation? That seems like a good enough reason to me. My primary concern is that I'm not sure whether we may need to invalidate other things as part of the CodeInstance as well. This kind of change just feels like the kind of thing you do once everything else is figured out.

@vchuravy (Member Author) commented
> but cache maintenance on world age invalidation? That seems like a good enough reason to me.

This is motivated by what GPUCompiler needs today. We have a severe latency issue for GPU-based code that is prohibitive for both HPC and application deployment. The primary source of the latency is the lack of cross-session inference caching, and while we have spent quite some time looking at building alternatives in the GPUCompiler stack, re-using the Base infrastructure seems the most straightforward. We do need to be careful that this information isn't reused by the wrong abstract interpreter, so tagging the CodeInstance for validation purposes would have been an independently sensible change.

> My primary concern is that I'm not sure whether we may need to invalidate other things as part of the CodeInstance

Yeah, I would love some additional flexibility in what can be stored in a CodeInstance, but I am planning on doing forward validation for any information based on this.

@Keno (Member) commented Nov 20, 2023

Ok, I didn't quite get the serialization implications of this change. Regardless, as I said, I don't think there's anything particularly wrong with it. 234a758 went in a similar direction with respect to external interpreter analyses.

// if (ci !in ci->defs->cache)
// record_field_change((jl_value_t**)&ci->next, NULL);
// Why are we checking that the method/module this originates from is in_image?
// and then disconnect this CI?
if (jl_object_in_image((jl_value_t*)ci->def->def.value)) {
vchuravy (Member Author):

@vtjnash @timholy do either of you remember why we have this additional condition here?

Member:

Sorry to be late to the party. It was added in #48751, so @vtjnash is the expert, but I interpret this as a way to truncate traversal. The point is that if you've already cached the method, you've brought in everything you should and nothing you shouldn't. E.g., from julia/src/staticdata.c, lines 791 to 799 in a28e553:
else if (jl_is_method(def) && jl_object_in_image(def)) {
// we only need 3 specific fields of this (the rest are restored afterward, if valid)
// in particular, cache is repopulated by jl_mi_cache_insert for all foreign function,
// so must not be present here
record_field_change((jl_value_t**)&mi->uninferred, NULL);
record_field_change((jl_value_t**)&mi->backedges, NULL);
record_field_change((jl_value_t**)&mi->callbacks, NULL);
record_field_change((jl_value_t**)&mi->cache, NULL);
}

@vchuravy marked this pull request as ready for review Nov 23, 2023 02:06
@vchuravy (Member Author) commented
@aviatesk I think this is ready for a first review.

@@ -86,7 +83,7 @@ struct EscapeCacheInfo
 end
 
 struct EscapeCache
-    cache::IdDict{MethodInstance,EscapeCacheInfo}
+    cache::IdDict{MethodInstance,EscapeCacheInfo} # Should this be CodeInstance to EscapeCacheInfo?
vchuravy (Member Author):
@aviatesk this is something we should talk about. In my head this should probably be an IdDict{CodeInstance, EscapeCacheInfo}, since we are attaching information to a particular inference task, and we store that information in the CodeInstance.
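A minimal sketch of that refactoring as it might look inside the compiler module (assuming `EscapeCacheInfo` keeps its current definition; this is the shape under discussion, not merged code):

```julia
struct EscapeCache
    # Keyed by the inference result the analysis was derived from, so each
    # entry's validity is tied to that CodeInstance's world range.
    cache::IdDict{CodeInstance,EscapeCacheInfo}
end
```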

aviatesk (Member):

Yeah, I agree. I would like to work on that refactoring to see what possibilities this change opens up, if you want.

vchuravy (Member Author):

That would be great!

@aviatesk self-assigned this Nov 24, 2023
base/compiler/cicache.jl (resolved; outdated)
-@test any(INVALIDATION_TESTER_CACHE.dict) do (mi, ci)
-    mi.def.name === :basic_caller
+let mi = only(Base.specializations(only(methods(basic_caller))))
vchuravy (Member Author):

We can make this logic simpler after #52176

base/compiler/cicache.jl (resolved; outdated)
@vchuravy (Member Author) commented
@nanosoldier runtests(configuration = (buildflags=["LLVM_ASSERTIONS=1", "FORCE_ASSERTIONS=1"],), vs_configuration = (buildflags = ["LLVM_ASSERTIONS=1", "FORCE_ASSERTIONS=1"],))

base/compiler/cicache.jl (resolved; outdated)
@nanosoldier (Collaborator) commented
The package evaluation job you requested has completed - possible new issues were detected.
The full report is available.

@vchuravy (Member Author) commented
Mostly just needed API changes, but Krotov is worth a closer look.

BoundsError(a=Array{Any, 1}(dims=(207,), mem=Memory{Any}(207, 0x7f53d847dc60)[SSAValue(1), SSAValue(2), SSAValue(3), SSAValue(4), nothing, …, SSAValue(199)]), i=(256,))
throw_boundserror at ./essentials.jl:14
getindex at ./essentials.jl:811 [inlined]
fixup_node at ./compiler/ssair/ir.jl:1819
fixup_phinode_values! at ./compiler/ssair/ir.jl:1796
fixup_node at ./compiler/ssair/ir.jl:1805
just_fixup! at ./compiler/ssair/ir.jl:1853
just_fixup! at ./compiler/ssair/ir.jl:1843 [inlined]
non_dce_finish! at ./compiler/ssair/ir.jl:1893 [inlined]
finish at ./compiler/ssair/ir.jl:1902 [inlined]
compact! at ./compiler/ssair/ir.jl:1933
run_passes_ipo_safe at ./compiler/optimize.jl:903
run_passes_ipo_safe at ./compiler/optimize.jl:918 [inlined]
optimize at ./compiler/optimize.jl:892
jfptr_optimize_44501.1 at /opt/julia/lib/julia/sys.so (unknown line)
_jl_invoke at /source/src/gf.c:2881 [inlined]
ijl_apply_generic at /source/src/gf.c:3063
_typeinf at ./compiler/typeinfer.jl:264
typeinf at ./compiler/typeinfer.jl:216
typeinf_ext at ./compiler/typeinfer.jl:1040
typeinf_ext_toplevel at ./compiler/typeinfer.jl:1074
typeinf_ext_toplevel at ./compiler/typeinfer.jl:1070

But this may be unrelated to this PR.

base/compiler/cicache.jl (resolved; outdated)
@vchuravy removed the speculative label Nov 29, 2023
@aviatesk (Member) left a comment

I generally think it's good, particularly the way it simplifies the external AbstractInterpreter's code cache. On the other hand, as Keno pointed out, I'm not completely clear on the serialization support. Am I right in thinking that this PR by itself enables serialization of CodeInstances inferred by an external AbstractInterpreter?

@@ -315,7 +315,7 @@ jl_datatype_t *jl_mk_builtin_func(jl_datatype_t *dt, const char *name, jl_fptr_a
     jl_atomic_store_relaxed(&m->unspecialized, mi);
     jl_gc_wb(m, mi);
 
-    jl_code_instance_t *codeinst = jl_new_codeinst(mi,
+    jl_code_instance_t *codeinst = jl_new_codeinst(mi, jl_nothing,
aviatesk (Member):

Maybe define

JL_DLLEXPORT jl_code_instance_t *jl_new_codeinst_native(
        jl_method_instance_t *mi,
        jl_value_t *rettype, jl_value_t *exctype,
        jl_value_t *inferred_const, jl_value_t *inferred,
        int32_t const_flags, size_t min_world, size_t max_world,
        uint32_t ipo_effects, uint32_t effects, jl_value_t *analysis_results,
        uint8_t relocatability
        /*, jl_array_t *edges, int absolute_max*/)
{
    return jl_new_codeinst(mi, /*owner*/jl_nothing, rettype, exctype, inferred_const, inferred,
                           const_flags, min_world, max_world, ipo_effects, effects,
                           analysis_results, relocatability
                           /*, edges, absolute_max*/);
}

and make it more explicit where we have native-compilation-specific code?

vchuravy (Member Author):

The next step would be to create a field in jl_task_t and use that to propagate the compilation context, which would remove the _native function again.
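From the Julia side, a hedged sketch of what such task-scoped propagation could look like, using ScopedValues (Julia 1.11+); the names here are hypothetical, not what Base ended up with:

```julia
using Base.ScopedValues

# Hypothetical task-scoped compilation context; `nothing` means native.
const COMPILATION_OWNER = ScopedValue{Any}(nothing)

# Run `f` with `owner` visible throughout its dynamic extent, standing in
# for a dedicated jl_task_t field.
with_owner(f, owner) = with(f, COMPILATION_OWNER => owner)
```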

aviatesk (Member):

Implementing it with, e.g., #50958 as a reference seems feasible. If that's the case, it might be best to keep the current code unchanged.

src/gf.c (resolved)
stdlib/REPL/src/REPLCompletions.jl (resolved; outdated)
test/compiler/newinterp.jl (resolved; outdated)
test/compiler/newinterp.jl (resolved; outdated)
Comment on lines -1609 to -1631
// call external callbacks registered with this method_instance
static void invalidate_external(jl_method_instance_t *mi, size_t max_world) {
aviatesk (Member):

It's nice that we no longer need this, since tagged CodeInstances automatically gain invalidation support: CodeInstances inferred by each AbstractInterpreter are appended to the `next` linked list, and they are invalidated by manipulating max_world, irrespective of their owner.
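A toy illustration of that uniform invalidation (hypothetical names; Base does this in C on the real `next` chain):

```julia
mutable struct ToyEntry
    owner::Any
    max_world::UInt
end

# Capping max_world retires an entry for all later world ages; note that
# the owner is never consulted.
function invalidate!(entries::Vector{ToyEntry}, new_max_world::UInt)
    for e in entries
        e.max_world > new_max_world && (e.max_world = new_max_world)
    end
    return entries
end
```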

@@ -442,7 +442,8 @@ function custom_lookup(mi::MethodInstance, min_world::UInt, max_world::UInt)
             end
         end
     end
     return CONST_INVOKE_INTERP.code_cache.dict[mi]
+    # XXX: This seems buggy, custom_lookup should probably construct the absint on demand.
aviatesk (Member):

It's hacky. We need a way to set a contextual AbstractInterpreter globally.

vchuravy (Member Author):

As far as I understand it, AbstractInterpreters are ephemeral, since they capture the world they operate in at construction.


test/compiler/invalidation.jl (resolved)
test/compiler/invalidation.jl (resolved; outdated)
aviatesk added a commit that referenced this pull request Feb 13, 2024

For external abstract interpreters like JET, which propagate custom data interprocedurally, it is essential to inject custom behaviors into where cached regular/constant inference results are used. Previously, this was accomplished by overloading both `abstract_call_method` and `get(::WorldView{CustomView}, ...)` (and `const_prop_call` and `cached_lookup`), but that approach was admittedly hacky and should probably be avoided. Moreover, after #52233, doing so has become infeasible when the external abstract interpreter uses `InternalCodeCache`.

To address this issue, this commit provides an interface named `return_cached_result`. This allows external abstract interpreters to inject custom interprocedural data propagation during abstract interpretation even when they use `InternalCodeCache`.
aviatesk added a commit to aviatesk/JET.jl that referenced this pull request Feb 13, 2024

Leverages JuliaLang/julia#52233 to use the internal code cache, which comes with inherent invalidation support.

Still requires:
- JuliaLang/julia#53300 (or JuliaLang/julia#53219)
- JuliaLang/julia#53318
aviatesk added a commit to aviatesk/JET.jl that referenced this pull request Feb 13, 2024

This lacks invalidation support (since this commit still uses the external code cache), but it lets JET run on the latest nightly at least.
aviatesk added a commit to aviatesk/julia that referenced this pull request Feb 13, 2024

It has been possible for external abstract interpreters to keep custom data in `codeinst.inferred` (together with overloading `inlining_policy`). After JuliaLang#52233, when such an external absint uses `InternalCodeCache`, this data is passed to `jl_ir_flag_inferred`, leading to segfaults in assertion builds.

This commit resolves the issue by omitting `jl_ir_flag_inferred` checks when the `cache_owner` is external. Nonetheless, a better resolution might be necessary. It suggests that the current design of `cache_owner` and `InternalCodeCache` for the external cache system is somewhat flawed. A conceivable approach could involve:
- Adding a layer similar to `inlining_policy` in `CC.get(::WorldView{InternalCodeCache})` to enable safe redirection of custom data to the native interpreter's implementation.
- Prohibiting custom data in the `inferred` field and directing such data to be kept in `analysis_results`.
aviatesk added a commit that referenced this pull request Feb 14, 2024 (#53318; same `return_cached_result` message as above).
fingolfin pushed a commit that referenced this pull request Feb 15, 2024

Two small cleanups to #52233:
- Don't use foreign code instances for native code-caching decisions
- Don't return a non-native codeinst for jl_method_compiled

Co-authored-by: Keno Fischer <keno@juliahub.com>
aviatesk added a commit that referenced this pull request Feb 15, 2024 (same `jl_ir_flag_inferred` message as above).
aviatesk added a commit that referenced this pull request Feb 15, 2024 (same message).
aviatesk added a commit that referenced this pull request Feb 16, 2024 (same message).
vtjnash pushed a commit that referenced this pull request Feb 17, 2024 (same `jl_ir_flag_inferred` message as above).
KristofferC pushed a commit that referenced this pull request Feb 26, 2024 (same message; cherry picked from commit 93876c9).
tecosaur pushed a commit to tecosaur/julia that referenced this pull request Mar 4, 2024 (same two-cleanups message as above).
tecosaur pushed a commit to tecosaur/julia that referenced this pull request Mar 4, 2024 (…iaLang#53300; same `jl_ir_flag_inferred` message as above).
vchuravy added a commit that referenced this pull request Apr 13, 2024

For GPUCompiler we would like to support a native on-disk cache of LLVM IR. One of the longstanding issues has been the cache invalidation of such an on-disk cache.

With #52233 we now have an integrated cache for the inference results, and we can rely on `CodeInstance` being stable across sessions. Due to #52119 we can also rely on the `objectid` being stable.

My initial thought was to key the native disk cache in GPUCompiler on the objectid of the corresponding CodeInstance (+ some compilation parameters).

While discussing this with @rayegun yesterday, we noted that having a CodeInstance with the same objectid might not be enough provenance. E.g. we are not guaranteed that the CodeInstance is from the same build artifact and the same precise source code.

For the package images we track this during loading and validate all contents at once, and we explicitly keep track of the provenance chain.

This PR adds a lookup table mapping from "external_blobs", e.g. loaded images, to the corresponding top module of each image, and uses this to determine the build_id of the package image.
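A hedged sketch of the keying idea described here (GPUCompiler-side; all names illustrative): derive the disk-cache key from the CodeInstance's session-stable objectid plus the compilation parameters, with the owning image's build_id supplying provenance:

```julia
# Illustrative only: combine the session-stable objectid (#52119), the
# compilation parameters, and the package image's build_id into one key.
disk_cache_key(ci::Core.CodeInstance, params, build_id::UInt128) =
    hash((objectid(ci), params, build_id))
```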
vchuravy added a commit that referenced this pull request Apr 15, 2024

…reters (#54069)

Partially reverts #49391.

PrecompileTools uses the timing infrastructure to snoop on the inference process. The reason for #49391 was that this could lead to accidental pollution of the caches with foreign results (timholy/SnoopCompile.jl#338).

After #52233 and especially #53336 we now filter results by cache owner and don't try to cache foreign code using the native pipeline.

Motivated by JuliaGPU/GPUCompiler.jl#567, which demonstrated that a foreign code instance would not be cached without PrecompileTools.
KristofferC pushed a commit that referenced this pull request Apr 17, 2024 (same #54069 message as above; cherry picked from commit c0611e8).
vchuravy added a commit that referenced this pull request Apr 19, 2024 (same GPUCompiler disk-cache message as above; cherry picked from commit d47cbf6).