add a type assert to logging to prevent Base.require invalidation #48810
Merged
Conversation
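The PR title refers to adding a type assertion in the logging code so that inference stays concrete and `Base.require` is not invalidated when new methods are defined. As a minimal, hypothetical illustration (not the actual diff) of how a `::T` assert narrows an inferred type:

```julia
# Hypothetical illustration (not the actual PR diff): pulling a value out of
# an untyped container infers as `Any`, which leaves downstream callers open
# to invalidation when new methods are defined. A typeassert pins the type.
config = Dict{Symbol,Any}(:max_level => 2)

untyped = config[:max_level]        # inferred as Any
typed   = config[:max_level]::Int   # typeassert: inferred as Int

typed + 1
```

With the assert in place, code consuming `typed` compiles against a concrete `Int` and no longer needs to be re-inferred when unrelated methods are added.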
KristofferC added the `compiler:latency` (Compiler latency), `backport 1.8` (Change should be backported to release-1.8), and `backport 1.9` (Change should be backported to release-1.9) labels · Feb 27, 2023
KristofferC pushed a commit to KristofferC/ModelBaseEcon.jl that referenced this pull request · Feb 27, 2023
ForwardDiff quite aggressively specializes most of its functions on the concrete input function type. This gives a slight performance improvement, but it also means that a significant chunk of code has to be compiled for every call to `ForwardDiff` with a new function. Previously, for every equation in a model we would call `ForwardDiff.gradient` with the Julia function corresponding to that equation, which compiled the ForwardDiff functions for every one of these functions. Looking at the specializations generated by a model, we see:

```julia
GC = ForwardDiff.GradientConfig{FRBUS_VAR.MyTag, Float64, 4, Vector{ForwardDiff.Dual{FRBUS_VAR.MyTag, Float64, 4}}}
MethodInstance for ForwardDiff.vector_mode_dual_eval!(::FRBUS_VAR.EquationEvaluator{:resid_515}, ::GC, ::Vector{Float64})
MethodInstance for ForwardDiff.vector_mode_gradient!(::DiffResults.MutableDiffResult{1, Float64, Tuple{Vector{Float64}}}, ::FRBUS_VAR.EquationEvaluator{:resid_515}, ::Vector{Float64}, ::GC)
MethodInstance for ForwardDiff.vector_mode_dual_eval!(::FRBUS_VAR.EquationEvaluator{:resid_516}, ::GC, ::Vector{Float64})
MethodInstance for ForwardDiff.vector_mode_gradient!(::DiffResults.MutableDiffResult{1, Float64, Tuple{Vector{Float64}}}, ::FRBUS_VAR.EquationEvaluator{:resid_516}, ::Vector{Float64}, ::GC)
```

which are all identical methods compiled for different equations. In this PR, we instead hide the concrete function for every equation behind a common wrapper function. This means that only one specialization of the ForwardDiff functions gets compiled.

Using the following benchmark script:

```julia
unique!(push!(LOAD_PATH, realpath("./models")))
using ModelBaseEcon
using Random

# See JuliaLang/julia#48810
@time using FRBUS_VAR

m = FRBUS_VAR.model
nrows = 1 + m.maxlag + m.maxlead
ncols = length(m.allvars)
pt = zeros(nrows, ncols);
@time @eval eval_RJ(pt, m);

using BenchmarkTools
@btime eval_RJ(pt, m);
```

this PR has the following changes:

- Package load time: 0.078s -> 0.05s
- First call to `eval_RJ`: 11.47s -> 4.97s
- Runtime performance of `eval_RJ`: 550μs -> 590μs

So there is roughly a 10% runtime regression in the `eval_RJ` call, but the latency is drastically reduced.
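The wrapper trick described above can be sketched in plain Julia, independent of ForwardDiff (names hypothetical): by storing each equation's function in a field of abstract type, every equation shares one concrete wrapper type, so code that takes the wrapper compiles only once.

```julia
# Hypothetical sketch of the "common wrapper" idea, independent of ForwardDiff.
# Each equation's concrete closure is hidden behind a field of abstract type
# `Function`, so all wrappers share the single concrete type `EquationWrapper`
# and downstream code specializes only once instead of once per equation.
struct EquationWrapper
    f::Function  # abstract field type: erases the per-equation closure type
end
(w::EquationWrapper)(x) = w.f(x)

eq1 = EquationWrapper(x -> x^2)
eq2 = EquationWrapper(x -> 2x + 1)

typeof(eq1) === typeof(eq2)  # one shared concrete type for all equations
eq1(3.0)                     # 9.0
```

The trade-off is one dynamic dispatch on `w.f` per call, which is consistent with the small runtime regression reported above alongside the large compile-time win.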
KristofferC pushed a commit to KristofferC/ModelBaseEcon.jl that referenced this pull request · Feb 27, 2023
…tion

Currently, every model gets its own ForwardDiff tag, which means that every model also has a unique dual number type. This causes every function called with dual numbers to be recompiled for every model. In this PR, we define a shared tag in ModelBaseEcon that all models use. This means that we can push the precompile generation for many functions from the models into ModelBaseEcon itself, which changes their cost from O(n_models) to O(1).

This PR also corrects a mismatch between the `precompile` call and the call to `ForwardDiff`. In the precompile calls, `MyTag` was used as the type to precompile for, which means the calls to `GradientConfig` should have used `MyTag()` (so that the type of the tag was `MyTag`). When `MyTag` itself was passed to the `GradientConfig` call, its type was actually `DataType`, so the types in the `precompile` call differed from the types actually encountered at runtime.

Using the following benchmark script:

```julia
unique!(push!(LOAD_PATH, realpath("./models"))) # hide
@time using ModelBaseEcon
using Random

# See JuliaLang/julia#48810
@time using FRBUS_VAR

m = FRBUS_VAR.model
nrows = 1 + m.maxlag + m.maxlead
ncols = length(m.allvars)
pt = zeros(nrows, ncols);
@time @eval eval_RJ(pt, m);

using BenchmarkTools
@btime eval_RJ(pt, m);
```

this PR has the following changes:

- Loading ModelBaseEcon: 0.641551s -> 0.645943s
- Loading model: 0.053s -> 0.032s
- First call to `eval_RJ`: 5.50s -> 0.64s
- Benchmark `eval_RJ`: 597.966μs -> 573.923μs
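The `MyTag` vs `MyTag()` mismatch described above comes down to the difference between a type and an instance of that type; a minimal sketch:

```julia
# Minimal sketch of the tag mismatch: `MyTag` (the type object itself) has
# type `DataType`, while `MyTag()` (an instance) has type `MyTag`. Precompiling
# methods for one while constructing configs with the other means the
# precompiled MethodInstances never match the signatures hit at runtime.
struct MyTag end

typeof(MyTag)                      # DataType
typeof(MyTag())                    # MyTag
typeof(MyTag) === typeof(MyTag())  # false: the signatures cannot match
```

So passing the instance `MyTag()` everywhere (or the type everywhere, consistently) is what makes the `precompile` statements line up with the runtime calls.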
aviatesk approved these changes · Feb 28, 2023
KristofferC pushed a commit to KristofferC/ModelBaseEcon.jl that referenced this pull request · Mar 13, 2023
KristofferC pushed a commit to KristofferC/ModelBaseEcon.jl that referenced this pull request · Mar 13, 2023