Replies: 3 comments 3 replies
-
The trick I've done for this in the past is to enforce symmetry for all the covariance matrices with Σ = Symmetric(diagm(σ) * Ρ * diagm(σ)). You have another covariance matrix there too, and it's hard to tell which is causing the issue without line numbers, but you can enforce symmetry there with the same principle.
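To make the suggestion concrete, here is a minimal, self-contained sketch of the same idea in a toy Turing model; the priors, variable names, and data layout are illustrative assumptions, not taken from the posts in this thread:

```julia
using Turing, Distributions, LinearAlgebra

# Minimal sketch: wrap the reconstructed covariance in Symmetric so tiny
# floating-point asymmetries cannot trip the Cholesky factorization.
@model function symmetric_cov_demo(y)
    d = size(y, 2)
    μ ~ MvNormal(zeros(d), Diagonal(ones(d)))
    σ ~ filldist(Exponential(1.0), d)
    ρ ~ LKJ(d, 1.0)
    # diagm(σ) * ρ * diagm(σ) rescales the correlation matrix to a covariance
    # matrix; Symmetric(...) enforces exact symmetry before it is used.
    Σ = Symmetric(diagm(σ) * ρ * diagm(σ))
    for i in axes(y, 1)
        y[i, :] ~ MvNormal(μ, Σ)
    end
end
```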
-
Hi, I just came across this issue as well. I've noticed two things:
The model below is just a toy model I've been playing around with to try and better understand how to use Julia and Turing. The intent was to estimate the means, standard deviations, and correlations from a distribution of known values.

```julia
using Random, Distributions, Turing
using FillArrays, LinearAlgebra
Random.seed!(13)
#Make the data
cols = 3
N = 50
mu = rand(Normal(0, 100), cols)
sd = rand(InverseGamma(3, 10), cols)
cor = rand(LKJ(cols, 1), 1)[1]
cov = sd * sd' .* cor
m = convert(Matrix, rand(MvNormal(mu, cov), N)')
#Use data to estimate priors
mu_p = [mean(col) for col in eachcol(m)]
sd_p = [std(col) for col in eachcol(m)]
@model function correlation_model(data, mu, sd)
    N = size(data, 1)
    C = size(data, 2)
    μ ~ MvNormal(mu, Diagonal(sd))
    #This is not efficient
    σ = Vector(undef, length(sd))  # untyped vector so the sampled values can be stored
    for i in 1:C
        σ[i] ~ Exponential(1 / sd[i])
    end
    #PROBLEM: Sometimes generates a domain error when rho is -1
    #PROBLEM: Sometimes generates a matrix that is not positive definite
    #Note: LinearAlgebra.checkpositivedefinite()
    ρ ~ LKJ(length(σ), 1)
    Σ = σ * σ' .* ρ
    #This is not efficient
    for i in 1:N
        data[i, :] ~ MvNormal(μ, Σ)
    end
end
model = correlation_model(m, mu_p, sd_p)
chain = sample(model, Turing.NUTS(500, 0.65), 1000)
```

I'm really new to Julia (trying to translate my Stan skills... or lack thereof), so I'm very open to the idea that I've done something inappropriate, but hopefully not.
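Regarding the two "This is not efficient" comments: a possible vectorized variant is sketched below. This is only an illustration, not the original poster's code; the use of arraydist/filldist and the transposed (columns-as-observations) data layout are assumptions.

```julia
# Hypothetical vectorized variant: data is passed in as a C × N matrix
# (one observation per column), so both loops collapse into single ~ statements.
@model function correlation_model_vec(data, mu, sd)
    C, N = size(data)
    μ ~ MvNormal(mu, Diagonal(sd))
    σ ~ arraydist([Exponential(1 / s) for s in sd])  # replaces the σ loop
    ρ ~ LKJ(C, 1)
    Σ = Symmetric(σ * σ' .* ρ)            # symmetrized, as suggested above
    data ~ filldist(MvNormal(μ, Σ), N)    # replaces the per-observation loop
end

# Usage (note the transpose relative to the original, row-major layout):
# model = correlation_model_vec(Matrix(m'), mu_p, sd_p)
```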
-
Hey @jan-glx, I just wanted to bump your discussion since it appears the … I'm personally still having problems with it, and I'm suspecting it has to do with a difference between … There are ways to catch these errors and apply the infinite log density yourself (…).
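For reference, the "catch these errors and apply the infinite log density yourself" idea can be sketched roughly as follows, using Turing's @addlogprob!; the model and priors here are placeholders, not anything taken from this thread:

```julia
using Turing, Distributions, LinearAlgebra

# Sketch: if the reconstructed covariance is not positive definite, add -Inf
# to the log density and return early so the sampler rejects the proposal.
@model function guarded_model(y)
    d = size(y, 2)
    μ ~ MvNormal(zeros(d), Diagonal(ones(d)))
    σ ~ filldist(Exponential(1.0), d)
    ρ ~ LKJ(d, 1.0)
    Σ = Symmetric(σ * σ' .* ρ)
    if !isposdef(Σ)
        Turing.@addlogprob! -Inf
        return
    end
    for i in axes(y, 1)
        y[i, :] ~ MvNormal(μ, Σ)
    end
end
```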
-
I am trying to do inference in the following simple model:
but it fails with
PosDefException: matrix is not Hermitian; Cholesky factorization failed.
Stan offers lkj_corr_cholesky and multi_normal_cholesky to prevent this problem and enable more efficient sampling. Is something like this possible with Turing.jl as well? Let me know if there is a better place to ask this!
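For what it's worth, a Cholesky-factor parameterization similar in spirit to Stan's lkj_corr_cholesky + multi_normal_cholesky can be sketched in Turing roughly as below. This assumes a recent Distributions/Turing version where LKJCholesky is usable as a prior; the model, priors, and names are placeholders rather than a confirmed recipe:

```julia
using Turing, Distributions, LinearAlgebra, PDMats

@model function cholesky_mvn(y)
    d = size(y, 2)
    μ ~ MvNormal(zeros(d), Diagonal(ones(d)))
    σ ~ filldist(truncated(Normal(0, 1); lower=0), d)
    F ~ LKJCholesky(d, 1.0)   # Cholesky factor of a correlation matrix
    # Scale the correlation Cholesky factor by the standard deviations and
    # build the covariance directly from its factorization, so no separate,
    # numerically fragile cholesky() call is needed.
    L = LowerTriangular(σ .* Matrix(F.L))
    Σ = PDMat(Cholesky(L))
    for i in axes(y, 1)
        y[i, :] ~ MvNormal(μ, Σ)
    end
end
```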