codegen: start to remove the ability to call back into inference #54655
Conversation
Force-pushed from 3622407 to 4d4bc60.
This is preparation for making codegen purely a transform of CodeInfo -> Module*, without involving any of the caches (at the C++ level), so that compilation can instead be driven by caches managed entirely at the Julia inference level. Removes the test for code_llvm lookup behavior: since codegen no longer does lookup, that behavior is no longer relevant to test.
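For reference, a minimal usage example of the code_llvm reflection entry point mentioned above (standard InteractiveUtils API; shown only for context, not part of this PR's changes):

using InteractiveUtils   # provides code_llvm outside the REPL

# Print the optimized LLVM IR that codegen produces for a simple call.
code_llvm(stdout, +, (Int, Int); optimize=true)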
…d of jl_type_infer. Removes the last remaining use of SOURCE_MODE_FORCE_SOURCE_UNCACHED, allowing it to be eliminated.
Force-pushed from 4d4bc60 to a4cc6c8.
fargs[0] = jl_get_global(CC, jl_symbol("typeinf_code"));
fargs[2] = (jl_value_t*)mi;
fargs[3] = jl_true;
ci = (jl_code_info_t*)jl_apply(fargs, 4);
Why is this not calling jl_typeinf_func? Or typeinf_ext_toplevel instead of constructing the NativeInterpreter manually?
Those do not return source code, while the point of this function is to implement a basic version of code_typed, for debugging in gdb when things are looking broken.
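For context, roughly the Julia-level call this C helper approximates; a minimal sketch, assuming the internal typeinf_code(interp, mi, run_optimizer) entry point and internal reflection helpers (Base._which, Core.Compiler.specialize_method), whose signatures can change between versions:

# Construct a NativeInterpreter by hand and ask for optimized source for a
# MethodInstance, roughly what the C helper above does through jl_apply.
world = Base.get_world_counter()
interp = Core.Compiler.NativeInterpreter(world)
match = Base._which(Tuple{typeof(sin), Float64}; world)   # internal reflection helper
mi = Core.Compiler.specialize_method(match)               # MethodInstance for sin(::Float64)
src = Core.Compiler.typeinf_code(interp, mi, true)        # a CodeInfo, as code_typed would show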
This should be fine for GPUCompiler. If you are planning to remove lookup from jl_ci_cache_lookup and thus jl_create_native, we should talk about a strategy for GPUCompiler.
Currently GPUCompiler does: https://github.com/JuliaGPU/GPUCompiler.jl/blob/8b513be9e2230fe0dd1905b805e25fa049b24d1d/src/jlgen.jl#L603-L617
I ran the GPUCompiler tests locally and they all passed.
cc: @maleadt
The plan is to require a CodeInstance in that IR position instead of a MethodInstance, so that the lookup is no longer necessary.
Continuing the development in #53219, according to the plan in https://hackmd.io/@vtjnash/codeinstances, this further separates the meaning behind CodeInfo and CodeInstance, such that CodeInstance can only be used as a call target, and cannot be used for code generation, while CodeInfo can only be used to generate code (or for reflection on what code would be generated), and cannot be used as a call target. Basically, the eventual idea is that CodeInfo will now only show up as an input (e.g. for doing inlining or codegen) and is ephemeral, while a CodeInstance is what shows up in a cache (e.g. as a callable object).
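As a small illustration of that split, a sketch using existing reflection plus Core.Compiler internals (the cache access below goes through internal helpers such as code_cache and Base._which, which exist today but are not a stable interface and may differ across versions):

# CodeInfo: the ephemeral input for inlining/codegen, e.g. what reflection returns.
src, rettype = only(code_typed(+, (Int, Int)))
src isa Core.CodeInfo            # true: source to generate code from, not a call target

# CodeInstance: the cached object that serves as a call target, keyed by
# MethodInstance and world (looked up here via the native interpreter's cache).
world = Base.get_world_counter()
interp = Core.Compiler.NativeInterpreter(world)
match = Base._which(Tuple{typeof(+), Int, Int}; world)    # internal reflection helper
mi = Core.Compiler.specialize_method(match)
ci = Core.Compiler.get(Core.Compiler.code_cache(interp), mi, nothing)
ci isa Union{Nothing, Core.CodeInstance}                  # nothing if not yet inferred/cached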