
MAP estimate doesn't take priors into account? #153

Closed
ClaudMor opened this issue Dec 22, 2020 · 5 comments

@ClaudMor

Hello,

Let me preface this by saying that this question is probably due to me being very new to the field.
My goal is to perform model parameter estimation, where the optimizer should also output parameter confidence intervals.
I tried using Bayesian inference (ABC) and it works; the problem is that my model has too many parameters for ABC to converge well.
I noticed that, if one doesn't need confidence intervals, then Optim (BFGS) is able to properly fit all the parameters (about 65 in our case).
So I read here that it is possible to define a cost function with priors, and I tried to replicate the tutorial about finding optimal parameter values for an ODE, but additionally set the priors keyword argument. The tutorial code is (reproduced here for ease of reference):

using Optim
using RecursiveArrayTools
using DiffEqParamEstim
using OrdinaryDiffEq
using Distributions

function f(du,u,p,t)
  du[1] = dx = p[1]*u[1] - u[1]*u[2]
  du[2] = dy = -3*u[2] + u[1]*u[2]
end

u0 = [1.0;1.0]
tspan = (0.0,10.0)
p = [1.5]
prob = ODEProblem(f,u0,tspan,p)


sol = solve(prob,Tsit5())
t = collect(range(0,stop=10,length=200))
# RecursiveArrayTools already imported above, for VectorOfArray
randomized = VectorOfArray([(sol(t[i]) + .01randn(2)) for i in 1:length(t)])
data = convert(Array,randomized)


cost_function = build_loss_objective(prob,Tsit5(),L2Loss(t,data);
                                     maxiters=10000,verbose=false)
                                     
result = optimize(cost_function, [1.42], BFGS())

And it converges to 1.4999981995513092.

If I insert a prior (here deliberately chosen so that it does not include the calibrated value):

cost_function = build_loss_objective(prob,Tsit5(),L2Loss(t,data); priors = [Uniform(2.0, 3.0)],
                                     maxiters=10000,verbose=false)
result = optimize(cost_function, [1.42], BFGS())

It converges to the same value.
Maybe this is not the correct way to do it. How should one use such a cost function with priors?
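One way to check whether the priors keyword is actually being applied is to compare the two objectives directly at a point outside the prior's support. A minimal sketch, reusing the names already defined above (prob, t, data); this is a diagnostic I would try, not code from the tutorial:

```julia
# Build the objective with and without the prior.
cost_no_prior = build_loss_objective(prob, Tsit5(), L2Loss(t, data);
                                     maxiters = 10000, verbose = false)
cost_with_prior = build_loss_objective(prob, Tsit5(), L2Loss(t, data);
                                       priors = [Uniform(2.0, 3.0)],
                                       maxiters = 10000, verbose = false)

# p = 1.5 lies outside Uniform(2.0, 3.0), where logpdf is -Inf.
# If the prior penalty is applied, the second objective should blow up;
# if both return the same finite value, the prior is being ignored.
@show cost_no_prior([1.5])
@show cost_with_prior([1.5])
```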

Thank you very much

@ChrisRackauckas
Member

That should be correct. Is it the exact same value?

Moving to DiffEqParamEstim.jl

@ChrisRackauckas ChrisRackauckas transferred this issue from SciML/DifferentialEquations.jl Dec 22, 2020
@ClaudMor
Author

ClaudMor commented Dec 22, 2020

yes, it is exactly the same value.

I was wondering how it could converge to a value in the region where the prior is zero.

Also, since I know the priors for all the parameters and I have the data to fit them against, would it be possible to get from Optim not only a calibrated value, but an entire posterior distribution, or at least confidence intervals, as ABC would provide?

@ChrisRackauckas
Member

@Vaibhavdixit02 could you take a look?

@Vaibhavdixit02
Member

@claudio20497 it should work now, after #155.

Also, since I know the priors for all the parameters and i have the data to fit them against, would it be possible to get from Optim not only a calibrated value, but an entire posterior distribution, or at least confidence intervals, like ABC would do?

What you can do here is use Optim to get the optimal parameters and then run NUTS with those as the initial values to get the posterior.
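A hedged sketch of that workflow using Turing.jl directly (the model structure, the Uniform prior bounds, the noise scale, and the initial_params keyword are all assumptions, not from this thread; keyword names vary across Turing versions):

```julia
using Turing, OrdinaryDiffEq, Optim, LinearAlgebra

# Wrap the ODE in a Turing model: sample the parameter from its prior,
# solve the ODE, and compare against the data with Gaussian noise.
@model function fit_ode(data, prob, t)
    p1 ~ Uniform(0.0, 5.0)                    # assumed prior, adjust to taste
    predicted = solve(remake(prob, p = [p1]), Tsit5(); saveat = t)
    for i in eachindex(t)
        data[:, i] ~ MvNormal(predicted[i], 0.01^2 * I)
    end
end

# Start NUTS from the point estimate found by Optim.
p_opt = Optim.minimizer(result)
chain = sample(fit_ode(data, prob, t), NUTS(0.65), 1000;
               initial_params = p_opt)
```

Credible intervals then come from the posterior samples, e.g. via quantile(chain) or describe(chain).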

@ClaudMor
Author

ClaudMor commented Jan 1, 2021

Hello @Vaibhavdixit02 ,

Thanks for your reply.

Reading the DiffEqBayes documentation, I see that NUTS (which is implemented by Turing.jl, and would thus require using turing_inference) doesn't seem to have an argument for setting initial parameter values.

On the other hand, DynamicNUTS has the initial argument, but it seems to have the problem reported in this issue.

So the best option is to wait for the above issue to be resolved, right?
