merTools
A package for getting the most out of large multilevel models in R
by Jared E. Knowles and Carl Frederick
Working with generalized linear mixed models (GLMM) and linear mixed
models (LMM) has become increasingly easy with advances in the lme4
package. As we have found ourselves using these models more and more
within our work, we, the authors, have developed a set of tools for
simplifying and speeding up common tasks for interacting with merMod
objects from lme4. This package provides those tools.
Installation
# development version
library(devtools)
install_github("jknowles/merTools")
# CRAN version
install.packages("merTools")
Recent Updates
merTools 0.6.2 (Early 2024)
- Maintenance release to fix minor issues with function documentation
- Fix #130 by avoiding a conflict with `vcov` in the `merDeriv` package
- Upgrade package test infrastructure to the testthat 3e specification
merTools 0.6.1 (Spring 2023)
- Maintenance release to keep package listed on CRAN
- Fix a small bug where parallel code path is run twice (#126)
- Update plotting functions to avoid deprecated `aes_string()` calls (#127)
- Fix (#115) in description
- Speed up PI using @bbolker pull request (#120)
- Updated package maintainer contact information
merTools 0.5.0
New Features
- `subBoot` now works with `glmerMod` objects as well
- `reMargins` is a new function that allows the user to marginalize the prediction over breaks in the distribution of the random effects; see `?reMargins` and the new `reMargins` vignette (closes #73)
Bug fixes
- Fixed an issue where known convergence errors were issuing warnings and causing the test suite to not work
- Fixed an issue where models with a random slope, no intercept, and no fixed term were unable to be predicted (#101)
- Fixed an issue with shinyMer not working with substantive fixed effects (#93)
merTools 0.4.1
New Features
- Standard errors reported by `merModList` functions now apply the Rubin correction for multiple imputation
Bug fixes
- Contribution by Alex Whitworth (@alexWhitworth) adding error checking to plotting functions
Shiny App and Demo
The easiest way to demo the features of this package is to use the bundled Shiny application, which launches a number of the metrics here to aid in exploring the model. To do this:
library(merTools)
m1 <- lmer(y ~ service + lectage + studage + (1|d) + (1|s), data=InstEval)
shinyMer(m1, simData = InstEval[1:100, ]) # just try the first 100 rows of data

On the first tab, the function presents the prediction intervals for the
data selected by the user, calculated using the predictInterval
function within the package. This function calculates prediction
intervals quickly by sampling from the simulated distribution of the
fixed effect and random effect terms and combining these simulated
estimates to produce a distribution of predictions for each observation.
This allows prediction intervals to be generated from very large models
where the use of bootMer would not be feasible computationally.
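The contrast with bootMer can be sketched directly. The timing below is illustrative only (it will vary by machine and model) and reuses the model `m1` fitted above:

```r
library(merTools)  # loads lme4 as a dependency

# m1 is the model fitted above:
# m1 <- lmer(y ~ service + lectage + studage + (1 | d) + (1 | s), data = InstEval)

# predictInterval draws from the simulated parameter distributions rather
# than refitting the model, so it stays feasible on large models
system.time(
  predictInterval(m1, newdata = InstEval[1:100, ], n.sims = 500)
)

# bootMer, by contrast, refits the model once per bootstrap replicate,
# which quickly becomes computationally prohibitive at this scale
```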

On the next tab the distribution of the fixed effect and group-level
effects is depicted on confidence interval plots. These are useful for
diagnostics and provide a way to inspect the relative magnitudes of
various parameters. This tab makes use of four related functions in
merTools: FEsim, plotFEsim, REsim and plotREsim which are
available to be used on their own as well.
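Each of these functions can also be called directly on the merMod object; a minimal sketch using the model `m1` from above:

```r
library(merTools)

# Simulate and plot the fixed-effect estimates
feEx <- FEsim(m1, n.sims = 200)
head(feEx)
plotFEsim(feEx)

# Simulate and plot the group-level (random) effects
reEx <- REsim(m1, n.sims = 200)
plotREsim(reEx)
```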

On the third tab are some convenient ways to show the influence or
magnitude of effects by leveraging the power of predictInterval. For
each case (up to 12) in the selected data, the user can view the
impact of changing either one of the fixed effects or one of the
grouping-level terms. Using the REimpact function, each case is
simulated with the model’s prediction as if all else were held equal
while the observation is moved through the distribution of the fixed
effect or the random effect term. This is plotted on the scale of the
dependent variable,
which allows the user to compare the magnitude of effects across
variables, and also between models on the same data.
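A minimal REimpact call against the model `m1` above; the choice of observation, grouping factor, and number of breaks here is illustrative:

```r
# Move observation 7 through five quantile bins of the 'd' random effect
impSim <- REimpact(m1, newdata = InstEval[7, ], groupFctr = "d",
                   breaks = 5, n.sims = 300, level = 0.9)
impSim  # mean prediction and interval within each bin of the RE distribution
```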
Predicting
Standard prediction looks like this:
predict(m1, newdata = InstEval[1:10, ])
#> 1 2 3 4 5 6 7 8
#> 3.146337 3.165212 3.398499 3.114249 3.320686 3.252670 4.180897 3.845219
#> 9 10
#> 3.779337 3.331013
With predictInterval we obtain predictions that are more like the
standard objects produced by lm and glm:
predictInterval(m1, newdata = InstEval[1:10, ], n.sims = 500, level = 0.9,
stat = 'median')
#> fit upr lwr
#> 1 3.215698 5.302545 1.4367495
#> 2 3.155941 5.327796 1.2210140
#> 3 3.374129 5.287901 1.4875231
#> 4 3.101672 5.183841 0.9248584
#> 5 3.299367 5.298370 1.3287058
#> 6 3.147238 5.368311 1.1132248
#> 7 4.155194 6.273147 2.2167207
#> 8 3.873493 5.705669 1.9152401
#> 9 3.740978 5.737517 2.0454222
#> 10 3.291242 5.297614 1.2375007
Note that predictInterval is slower because it is computing
simulations. It can also return all of the simulated yhat values as an
attribute of the prediction object itself.
predictInterval uses the sim function from the arm package heavily
to draw the distributions of the parameters of the model. It then
combines these simulated values to create a distribution of the yhat
for each observation.
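For example, passing returnSims = TRUE attaches the full simulation matrix to the result as an attribute (a sketch reusing `m1` from above):

```r
preds <- predictInterval(m1, newdata = InstEval[1:10, ], n.sims = 500,
                         returnSims = TRUE)
yhats <- attr(preds, "sim.results")
dim(yhats)  # rows = observations, columns = simulation draws
```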
Inspecting the Prediction Components
We can also explore the components of the prediction interval by asking
predictInterval to return specific components of the prediction
interval.
predictInterval(m1, newdata = InstEval[1:10, ], n.sims = 200, level = 0.9,
stat = 'median', which = "all")
#> effect fit upr lwr obs
#> 1 combined 3.35554348 5.217964 1.615782 1
#> 2 combined 3.21487934 5.327824 1.114338 2
#> 3 combined 3.44493242 5.474256 1.809136 3
#> 4 combined 3.24123655 4.838427 1.272174 4
#> 5 combined 3.20539661 5.367651 1.068128 5
#> 6 combined 3.54335144 5.481756 1.585809 6
#> 7 combined 4.23212790 6.267669 2.284923 7
#> 8 combined 4.05055116 5.684968 1.931558 8
#> 9 combined 3.84266853 5.492163 2.091312 9
#> 10 combined 3.24121727 5.183680 1.196101 10
#> 11 s -0.02342248 1.948494 -1.691035 1
#> 12 s 0.04148408 2.091467 -1.782386 2
#> 13 s 0.04477028 2.087629 -2.144621 3
#> 14 s 0.26160482 2.114509 -1.733429 4
#> 15 s -0.10803386 1.714535 -1.982283 5
#> 16 s -0.04962613 1.916212 -1.909187 6
#> 17 s 0.24916111 2.001528 -1.628554 7
#> 18 s 0.19640074 2.070513 -1.473660 8
#> 19 s 0.27031215 2.119763 -1.643120 9
#> 20 s 0.13772544 2.313012 -1.855489 10
#> 21 d -0.32196201 1.357316 -2.397083 1
#> 22 d -0.29691477 1.422141 -2.662141 2
#> 23 d 0.24828667 1.782181 -1.987563 3
#> 24 d -0.37893052 1.471225 -2.350781 4
#> 25 d 0.02142086 2.172075 -2.148417 5
#> 26 d 0.07926221 2.003462 -1.677765 6
#> 27 d 0.76480967 2.767889 -1.274501 7
#> 28 d 0.08757337 2.374201 -1.958689 8
#> 29 d 0.25289032 2.083732 -1.376630 9
#> 30 d -0.17775160 1.601744 -2.115104 10
#> 31 fixed 3.16750528 5.010517 1.371678 1
#> 32 fixed 3.21493166 5.246672 1.074857 2
#> 33 fixed 3.36233628 5.581696 1.474776 3
#> 34 fixed 3.17926915 5.107315 1.621278 4
#> 35 fixed 3.16562882 5.136197 1.156010 5
#> 36 fixed 3.15944014 5.114967 1.506315 6
#> 37 fixed 3.32101367 5.149819 1.407884 7
#> 38 fixed 3.34020282 5.189215 1.651446 8
#> 39 fixed 3.17901802 5.000429 1.132874 9
#> 40 fixed 3.41100236 5.207451 1.555844 10
This can lead to some useful plotting:
library(ggplot2)
plotdf <- predictInterval(m1, newdata = InstEval[1:10, ], n.sims = 2000,
level = 0.9, stat = 'median', which = "all",
include.resid.var = FALSE)
plotdfb <- predictInterval(m1, newdata = InstEval[1:10, ], n.sims = 2000,
level = 0.9, stat = 'median', which = "all",
include.resid.var = TRUE)
plotdf <- dplyr::bind_rows(plotdf, plotdfb, .id = "residVar")
plotdf$residVar <- ifelse(plotdf$residVar == 1, "No Model Variance",
"Model Variance")
ggplot(plotdf, aes(x = obs, y = fit, ymin = lwr, ymax = upr)) +
geom_pointrange() +
geom_hline(yintercept = 0, color = "red", linewidth = 1.1) +
scale_x_continuous(breaks = c(1, 10)) +
facet_grid(residVar~effect) + theme_bw()
We can also investigate the makeup of the prediction for each observation.
