including p-values in customized output omits random effects from mixed models #494
Thanks for reporting! I don't use mixed-effects models often, so these are very useful test cases. Should be fixed in 0959b02. Note that user input in glue strings is totally arbitrary, so my current checks won't catch every possibility, but it should be a considerable improvement.

```r
library(lme4)
library(modelsummary)

model <- lmer(Sepal.Width ~ Petal.Length + (1 | Species), data = iris)
modelsummary(model,
             "markdown",
             estimate  = "{estimate} [{conf.low}, {conf.high}] {p.value}",
             statistic = NULL)
```
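As an aside (not part of the original thread): a minimal base-R illustration of why arbitrary glue-style templates are hard to guard. The `cell` helper below is hypothetical, not `modelsummary` code; it naively fills `{placeholders}` and shows how a missing statistic leaks into the cell as the literal string `"NA"`, which is exactly what the checks above need to intercept.

```r
# Hypothetical, naive template fill (NOT modelsummary's implementation):
# each "{name}" placeholder is substituted with as.character() of its value,
# so a missing p-value on a random-effects row renders as the literal "NA".
cell <- function(tpl, vals) {
  for (nm in names(vals)) {
    tpl <- sub(paste0("{", nm, "}"), as.character(vals[[nm]]), tpl, fixed = TRUE)
  }
  tpl
}

cell("{estimate} [{conf.low}, {conf.high}] {p.value}",
     list(estimate = 0.89, conf.low = 0.33, conf.high = 2.43, p.value = NA))
#> "0.89 [0.33, 2.43] NA"
```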
Nice, thanks!
I'm re-opening it, because it doesn't fully work when CIs are only partially available, see:

```r
m1 <- lme4::lmer(Sepal.Width ~ Petal.Length + (1 | Species), data = iris)
m2 <- lme4::lmer(Sepal.Width ~ Petal.Length + (1 + Petal.Length | Species), data = iris)
m3 <- lme4::lmer(Sepal.Width ~ Petal.Length + Petal.Width + (1 + Petal.Length | Species), data = iris)
#> boundary (singular) fit: see help('isSingular')

parameters::compare_models(m1, m2, m3, effects = "all")
#> Your model may suffer from singularity (see '?lme4::isSingular' and
#> '?performance::check_singularity').
#> Some of the standard errors and confidence intervals of the random
#> effects parameters are probably not meaningful!
#> Parameter                    | m1                | m2                     | m3
#> -------------------------------------------------------------------------------------------------
#> (Intercept)                  | 2.00 (0.89, 3.11) | 1.99 ( 0.89, 3.10)     | 1.76 (0.44, 3.08)
#> Petal Length                 | 0.28 (0.16, 0.40) | 0.29 ( 0.16, 0.41)     | 0.16 (0.03, 0.29)
#> SD (Intercept)               | 0.89 (0.33, 2.43) | 0.88 ( 0.27, 2.87)     | 1.10 (0.36, 3.30)
#> SD (Observations)            | 0.32 (0.28, 0.35) | 0.32 ( 0.28, 0.35)     | 0.30 (0.26, 0.33)
#> SD (Petal.Length)            |                   | 0.02 ( 0.00, 1.76e+07) | 0.03 (0.00, 2.33e+05)
#> Cor (Intercept~Petal.Length) |                   | 0.19 (-1.00, 1.00)     | 1.00 ( , 1.00)
#> Petal Width                  |                   |                        | 0.61 (0.34, 0.88)
#> -------------------------------------------------------------------------------------------------
#> Observations                 | 150               | 150                    | 150
```

Created on 2022-06-08 by the reprex package (v2.0.1)

In this case, the following code produces the following table:

```r
modelsummary::modelsummary(model,
                           estimate  = "{estimate} [{conf.low}, {conf.high}]",
                           shape     = group + term + statistic ~ model,
                           statistic = NULL, fmt = 2)
```

Note the message from
Is this working as you expected with the latest commits?

```r
library(modelsummary)

models <- list(
  lme4::lmer(Sepal.Width ~ Petal.Length + (1 | Species), data = iris),
  lme4::lmer(Sepal.Width ~ Petal.Length + (1 + Petal.Length | Species), data = iris),
  lme4::lmer(Sepal.Width ~ Petal.Length + Petal.Width + (1 + Petal.Length | Species), data = iris))

modelsummary(
  models,
  estimate  = "{estimate} [{conf.low}, {conf.high}]",
  statistic = NULL,
  gof_map   = NA,
  output    = "markdown")
```

Created on 2022-06-08 by the reprex package (v2.0.1)
Preparing a Ripley-induced emergency release, so closing all the issues which seem (to me) complete. Feel free to re-open or keep the discussion going if you see a need.
Current output:
It would be great to keep the random effects, and just set blank fields for p-values in the random effects rows.
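The requested behavior can be sketched in base R, without assuming anything about `modelsummary` internals. The `fmt_cell` helper below is purely illustrative: it formats the estimate and confidence interval as usual, but emits an empty string instead of `"NA"` when the p-value is missing, so random-effects rows stay in the table with a blank p-value field.

```r
# Illustrative sketch only (not modelsummary code): keep the row, blank the
# p-value field when it is NA, as on random-effects rows.
fmt_cell <- function(estimate, conf.low, conf.high, p.value) {
  p <- if (is.na(p.value)) "" else sprintf("%.3f", p.value)
  trimws(sprintf("%.2f [%.2f, %.2f] %s", estimate, conf.low, conf.high, p))
}

fmt_cell(0.28, 0.16, 0.40, 0.001)  # fixed effect:  "0.28 [0.16, 0.40] 0.001"
fmt_cell(0.89, 0.33, 2.43, NA)     # random effect: "0.89 [0.33, 2.43]"
```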