
<!-- README.md is generated from README.Rmd. Please edit that file -->

CVtreeMLE <img src="man/figures/CVtreeMLE_sticker.png" style="float:right; height:200px;">

<!-- badges: start -->

R-CMD-check · Coverage Status · CRAN · CRAN downloads · CRAN total downloads · Project Status: Active – The project has reached a stable, usable state and is being actively developed · MIT license

<!-- [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4070042.svg)](https://doi.org/10.5281/zenodo.4070042) --> <!-- [![DOI](https://joss.theoj.org/papers/10.21105/joss.02447/status.svg)](https://doi.org/10.21105/joss.02447) -->

Codecov test coverage

<!-- badges: end -->

Discovery of Critical Thresholds in Mixed Exposures and Estimation of Policy Intervention Effects using Targeted Learning

Author: David McCoy


What is CVtreeMLE?

This package operationalizes the methodology presented here:

https://arxiv.org/abs/2302.07976

People are often exposed to several things at once (e.g., multiple drugs or pollutants). Policymakers are interested in setting safe limits, interdictions, or recommended dosage combinations based on a set of thresholds, one per exposure. Setting these thresholds is difficult because all relevant interactions between exposures must be accounted for. Previous statistical approaches have relied on parametric estimators that do not directly address the question of safe exposure limits, rest on unrealistic assumptions, and do not yield a threshold-based statistical quantity that is directly relevant to policy regulators.

Here we present an estimator that (a) identifies the thresholds that minimize (or maximize) the expected outcome, controlling for covariates and the other exposures; and (b) efficiently estimates the effect of a policy intervention comparing the expected outcome if everyone were held to these safe levels against the expected outcome under the observed exposure distribution.
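In hedged notation (these symbols are illustrative shorthand, not necessarily the paper's exact notation), the target parameter contrasts the counterfactual mean outcome under assignment into the discovered region with the observed mean:

$$\psi = \mathbb{E}\big[Y(A \in R)\big] - \mathbb{E}[Y],$$

where $A$ is the exposure vector, $Y$ the outcome, and $R$ the data-adaptively identified threshold region.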

This is done with cross-validation: in the training folds, a custom tree-based g-computation search algorithm finds the minimizing region, and in the held-out estimation fold the corresponding policy intervention is estimated with targeted maximum likelihood estimation.
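As a rough sketch of the sample-splitting scheme (base R only; this is not the package's internal code, and the fold construction here is a simplification):

```r
set.seed(1)
# Toy stand-in data; the real analysis uses a mixed exposure, covariates, and outcome
dat <- data.frame(y = rnorm(100), x1 = rnorm(100), x2 = rnorm(100))

# V-fold cross-validation indices (V = 5)
folds <- split(seq_len(nrow(dat)), rep(1:5, length.out = nrow(dat)))

for (v in seq_along(folds)) {
  train <- dat[-folds[[v]], ] # tree-based g-computation search finds the region here
  est   <- dat[folds[[v]], ]  # TMLE estimates the policy effect of that region here
}
```

In the actual package the region search and TMLE steps are handled internally; this loop only illustrates where each step operates.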

Inputs and Outputs

The package takes as input:

- a mixed exposure, covariates, and an outcome
- Super Learner stacks of learners, if specified (sensible defaults are used otherwise)
- the number of cross-validation folds
- the minimum number of observations allowed in a region
- whether the desired region is a minimizer or a maximizer
- parallelization parameters

The output consists of k-fold-specific results, with valid inference, for the region found in each fold; a pooled estimate of the overall oracle parameter across all folds; and pooled exposure sets when the region is somewhat inconsistent across folds.
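To make the inputs concrete, a hypothetical call might look like the following. The argument names (`w`, `a`, `y`, `n_folds`, `direction`) and column names are assumptions for illustration, not verified against the package — check `?CVtreeMLE` for the actual interface:

```r
# Hypothetical sketch -- argument and column names are assumed, not the package's verified API
result <- CVtreeMLE(
  data = analysis_data,   # data frame holding exposures, covariates, and outcome
  w = c("age", "sex"),    # covariate column names (assumed example)
  a = c("pm25", "no2"),   # mixed-exposure column names (assumed example)
  y = "fev1",             # outcome column name (assumed example)
  n_folds = 5,            # number of cross-validation folds
  direction = "minimize"  # whether the target region minimizes or maximizes the outcome
)
```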


Installation

Note: CVtreeMLE currently depends on the sl3 package, which provides ensemble machine learning for nuisance parameter estimation. Because sl3 is not on CRAN, CVtreeMLE is likewise not available on CRAN and must be installed from GitHub.

CVtreeMLE has many dependencies, so it is easiest to install the packages in stages and confirm that each installs properly.

CVtreeMLE uses the sl3 package to build ensemble machine learners for each nuisance parameter.

Install sl3 from its devel branch:

```r
remotes::install_github("tlverse/sl3@devel")
```

Make sure sl3 installs correctly, then install CVtreeMLE:

```r
remotes::install_github("blind-contours/CVtreeMLE@main")
```

Example

First, load CVtreeMLE and the other packages we will need:

```r
library(CVtreeMLE)
library(sl3)
library(dplyr)
library(kableExtra)
library(ggplot2)

seed <- 98484
set.seed(seed)
```

To illustrate how CVtreeMLE may be used to find and estimate a region that, if intervened on, would lead to the biggest reduction in an outcome, we use synthetic data from the National Institute of Environmental Health Sciences:

National Institute of Environmental Health Sciences Data

The 2015 NIEHS Mixtures Workshop was developed to determine whether new mixture methods detect ground-truth interactions built into simulated data. This lets us simultaneously show CVtreeMLE's output, its interpretation, and its validity.

For detailed information on this simulated data please see:

https://github.com/niehs-prime/2015-NIEHS-MIxtures-Workshop

```r
niehs_data <- NIEHS_data_1

head(niehs_data) %>%
  kableExtra::kbl(caption = "NIEHS Data") %>%
  kableExtra::kable_classic(full_width = FALSE, html_font = "Cambria")
```
**NIEHS Data**

| obs | Y | X1 | X2 | X3 | X4 | X5 | X6 | X7 | Z |
|----:|----------:|----------:|----------:|----------:|----------:|----------:|----------:|----------:|--:|
| 1 | 7.534686 | 0.4157066 | 0.5308077 | 0.2223965 | 1.1592634 | 2.4577556 | 0.9438601 | 1.8714406 | 0 |
| 2 | 19.611934 | 0.5293572 | 0.9339570 | 1.1210595 | 1.3350074 | 0.3096883 | 0.5190970 | 0.2418065 | 0 |
| 3 | 12.664050 | 0.4849759 | 0.7210988 | 0.4629027 | 1.0334138 | 0.9492810 | 0.3664090 | 0.3502445 | 0 |
| 4 | 15.600288 | 0.8275456 | 1.0457137 | 0.9699040 | 0.9045099 | 0.9107914 | 0.4299847 | 1.0007901 | 0 |
| 5 | 18.606498 | 0.5190363 | 0.7802400 | 0.6142188 | 0.3729743 | 0.5038126 | 0.3575472 | 0.5906156 | 0 |
| 6 | 18.525890 | 0.4009491 | 0.8639886 | 0.5501847 | 0.9011016 | 1.2907615 | 0.7990418 | 1.5097039 | 0 |

Briefly, this synthetic data can be considered the results of a prospective cohort epidemiologic study. The outcome cannot cause the exposures (as might occur in a cross-sectional study). Correlations between exposure variables can be thought of as caused by common sources or modes of exposure. The nuisance variable Z can be assumed to be a potential confounder and not a collider. There are 7 exposures which have a complicated dependency structure. $X_3$ and $X_6$ do not have an impact on the outcome.

One issue is that many machine learning algorithms will fail when only one variable is passed as a feature, so let's add some additional covariates:

```r
niehs_data$Z2 <- rbinom(nrow(niehs_data),
  size = 1,
  prob = 0.3
)

niehs_data$Z3 <- rbinom(nrow(niehs_data),
  size = 1,
  prob = 0.1
)
```

Run CVtreeMLE

```r
ptm <- proc.time()

# Convert continuous X variables to their corresponding deciles, for example
niehs_data <- niehs_data %>%
  mutate(across(starts_with("X"), ~ ntile(.x, 10)))
```
