Welcome to the StratifiedMedicine R package. The overall goal of this package is to develop analytic and visualization tools to aid in stratified and personalized medicine. Stratified medicine aims to find subsets or subgroups of patients with similar treatment effects, for example responders vs non-responders, while personalized medicine aims to understand treatment effects at the individual level (does a specific individual respond to treatment A?). Development of this package is ongoing.

Currently, the main algorithm in this package is “PRISM” (Patient Response Identifiers for Stratified Medicine; Jemielita and Mehrotra 2019, in progress). Given a data structure of \((Y, A, X)\) (outcome(s), treatment, covariates), PRISM is a five-step procedure:

1. **Estimand**: Determine the question(s) or estimand(s) of interest. For example, \(\theta_0 = E(Y|A=1)-E(Y|A=0)\), where A is a binary treatment variable. While this isn’t an explicit step in the PRISM function, the question of interest guides how to set up PRISM.

2. **Filter (filter)**: Reduce the covariate space by removing variables unrelated to the outcome/treatment. Formally: \[ filter(Y, A, X) \longrightarrow (Y, A, X^{\star}) \] where \(X^{\star}\) has potentially lower dimension than \(X\).

3. **Patient-level estimate (ple)**: Estimate counterfactual patient-level quantities, for example the individual treatment effect, \(\theta(x) = E(Y|X=x,A=1)-E(Y|X=x,A=0)\). Formally: \[ ple(Y, A, X^{\star}) \longrightarrow \hat{\mathbf{\theta}}(X^{\star}) \] where \(\hat{\mathbf{\theta}}(X^{\star})\) is the matrix of patient-level estimates.

4. **Subgroup model (submod)**: Partition the data into subsets of patients (likely with similar treatment effects). Formally: \[ submod(Y, A, X^{\star}, \hat{\mathbf{\theta}}(X^{\star})) \longrightarrow \mathbf{S}(X^{\star}) \] where \(\mathbf{S}(X^{\star})\) is a distinct set of rules that define the \(k=0,...,K\) discovered subgroups, for example \(\mathbf{S}(X^{\star}) = \{X_1=0, X_2=0\}\). Note that subgroups could be formed using the observed outcomes, the PLEs, or both. By default, \(k=0\) corresponds to the overall population.

5. **Parameter estimation and inference (param)**: For the overall population and the discovered subgroups, output point estimates and variability metrics. Formally: \[ param(Y, A, X^{\star}, \hat{\mathbf{\theta}}(X^{\star}), \mathbf{S}(X^{\star}) ) \longrightarrow \{ \hat{\theta}_{k}, SE(\hat{\theta}_k), CI_{\alpha}(\hat{\theta}_{k}), P(\hat{\theta}_{k} > c) \} \text{ for } k=0,...,K \] where \(\hat{\theta}_{k}\) is the point estimate, \(SE(\hat{\theta}_k)\) is the standard error, \(CI_{\alpha}(\hat{\theta}_{k})\) is a two- (or one-) sided confidence interval with nominal coverage \(1-\alpha\), and \(P(\hat{\theta}_{k} > c)\) is a probability statement for some constant \(c\) (ex: \(c=0\)). These outputs are crucial for go/no-go decision making. There can also be multiple parameter estimates per subgroup, such as a survival curve across fixed time points.
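The data flow across these steps can be sketched with toy stand-in functions. Note that the `my_*` functions below are hypothetical stubs written only to illustrate what is passed between steps; they are not the package's implementations (the real defaults are, e.g., filter_glmnet, ple_ranger, submod_lmtree, param_ple):

```
# Toy sketch of the PRISM data flow (stubs for illustration only)
my_filter <- function(Y, A, X) X[, c("X1", "X2"), drop = FALSE]  # keep 2 covariates
my_ple    <- function(Y, A, Xs) ifelse(Xs$X1 > 0, 0.5, 0)        # crude PLE guess
my_submod <- function(Y, A, Xs, theta) ifelse(Xs$X1 > 0, 1, 2)   # 2 subgroups
my_param  <- function(Y, A, Xs, theta, S) {
  data.frame(Subgrps = sort(unique(S)),
             est = as.numeric(tapply(theta, S, mean)))           # estimate per subgroup
}

set.seed(1)
X <- data.frame(X1 = rnorm(20), X2 = rnorm(20), X3 = rnorm(20))
A <- rbinom(20, 1, 0.5)
Y <- rnorm(20)

Xs    <- my_filter(Y, A, X)          # Step 2: X -> X*
theta <- my_ple(Y, A, Xs)            # Step 3: patient-level estimates
S     <- my_submod(Y, A, Xs, theta)  # Step 4: subgroup assignments
my_param(Y, A, Xs, theta, S)         # Step 5: estimates per subgroup
```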

Ultimately, PRISM provides information at the patient level, the subgroup level (if any), and the overall population. While there are defaults in place, the user can also input their own functions/model wrappers into the PRISM algorithm; we demonstrate this later. PRISM can also run without treatment assignment (A=NULL); in this setting, the focus is on finding subgroups based on prognostic effects. Lastly, the tables below describe the default PRISM configurations for different family (gaussian, binomial, survival) and treatment (no treatment vs treatment) settings, including the associated estimands. Note that HR refers to the hazard ratio, while RMST refers to the restricted mean survival time.


**With treatment (A provided):**

Step | gaussian | binomial | survival
---|---|---|---
estimand(s) | E(Y\|A=0), E(Y\|A=1), E(Y\|A=1)-E(Y\|A=0) | E(Y\|A=0), E(Y\|A=1), E(Y\|A=1)-E(Y\|A=0) | HR(A=1 vs A=0)
filter | filter_glmnet | filter_glmnet | filter_glmnet
ple | ple_ranger | ple_ranger | ple_ranger
submod | submod_lmtree | submod_lmtree | submod_weibull
param | param_ple | param_ple | param_cox

**Without treatment (A=NULL):**

Step | gaussian | binomial | survival
---|---|---|---
estimand(s) | E(Y) | Prob(Y) | RMST
filter | filter_glmnet | filter_glmnet | filter_glmnet
ple | ple_ranger | ple_ranger | ple_ranger
submod | submod_ctree | submod_ctree | submod_ctree
param | param_lm | param_lm | param_rmst

Consider a continuous outcome (ex: % change in tumor size) with a binary treatment (study drug vs standard of care). The estimand of interest is the average treatment effect, \(\theta_0 = E(Y|A=1)-E(Y|A=0)\). First, we simulate continuous data where roughly 30% of the patients receive no treatment benefit from \(A=1\) vs \(A=0\). Responders vs non-responders are defined by the continuous predictive covariates \(X_1\) and \(X_2\), for a total of four subgroups. The subgroup treatment effects are: \(\theta_{1} = 0\) (\(X_1 \leq 0, X_2 \leq 0\)), \(\theta_{2} = 0.25\) (\(X_1 > 0, X_2 \leq 0\)), \(\theta_{3} = 0.45\) (\(X_1 \leq 0, X_2 > 0\)), and \(\theta_{4} = 0.65\) (\(X_1 > 0, X_2 > 0\)).

```
library(ggplot2)
library(dplyr)
library(partykit)
library(StratifiedMedicine)
dat_ctns = generate_subgrp_data(family="gaussian")
Y = dat_ctns$Y
X = dat_ctns$X # 50 covariates, 46 are noise variables, X1 and X2 are truly predictive
A = dat_ctns$A # binary treatment, 1:1 randomized
length(Y)
#> [1] 800
table(A)
#> A
#> 0 1
#> 409 391
dim(X)
#> [1] 800 50
```

For continuous data (family=“gaussian”), the default PRISM configuration is: filter_glmnet (elastic net), ple_ranger (treatment-specific random forest models), submod_lmtree (model-based partitioning with OLS loss), and param_ple (parameter estimation/inference through the PLEs). Jemielita and Mehrotra 2019 (in progress) show that this configuration performs quite well in terms of bias, efficiency, coverage, and selecting the right predictive covariates. To run PRISM, at a minimum, the outcome (Y), treatment (A), and covariates (X) must be provided. See below.

```
# PRISM Default: filter_glmnet, ple_ranger, submod_lmtree, param_ple #
res0 = PRISM(Y=Y, A=A, X=X)
#> Observed Data
#> Filtering: filter_glmnet
#> PLE: ple_ranger
#> Subgroup Identification: submod_lmtree
#> Parameter Estimation: param_ple
summary(res0)
#> $`PRISM Configuration`
#> [1] "filter_glmnet => ple_ranger => submod_lmtree => param_ple"
#>
#> $`Variables that Pass Filter`
#> [1] "X1" "X2" "X3" "X5" "X7" "X8" "X10" "X12" "X16" "X18" "X24"
#> [12] "X26" "X31" "X40" "X46" "X50"
#>
#> $`Number of Identified Subgroups`
#> [1] 5
#>
#> $`Parameter Estimates`
#> Subgrps N estimand est SE alpha CI
#> 1 0 800 E(Y|A=0) 1.6378 0.0457 0.05 [1.5481,1.7275]
#> 4 3 149 E(Y|A=0) 1.2826 0.1118 0.05 [1.0618,1.5035]
#> 7 4 277 E(Y|A=0) 1.6094 0.0671 0.05 [1.4772,1.7415]
#> 10 7 99 E(Y|A=0) 1.5959 0.1413 0.05 [1.3155,1.8763]
#> 13 8 168 E(Y|A=0) 1.7675 0.0892 0.05 [1.5914,1.9436]
#> 16 9 107 E(Y|A=0) 2.0408 0.1350 0.05 [1.7731,2.3085]
#> 2 0 800 E(Y|A=1) 1.8429 0.0487 0.05 [1.7473,1.9384]
#> 5 3 149 E(Y|A=1) 1.3179 0.1098 0.05 [1.1009,1.5349]
#> 8 4 277 E(Y|A=1) 1.6780 0.0769 0.05 [1.5266,1.8294]
#> 11 7 99 E(Y|A=1) 1.9289 0.1305 0.05 [1.67,2.1879]
#> 14 8 168 E(Y|A=1) 2.0425 0.0978 0.05 [1.8495,2.2355]
#> 17 9 107 E(Y|A=1) 2.6076 0.1157 0.05 [2.3783,2.8369]
#> 3 0 800 E(Y|A=1)-E(Y|A=0) 0.2051 0.0633 0.05 [0.0809,0.3293]
#> 6 3 149 E(Y|A=1)-E(Y|A=0) 0.0353 0.1544 0.05 [-0.2698,0.3404]
#> 9 4 277 E(Y|A=1)-E(Y|A=0) 0.0686 0.1009 0.05 [-0.13,0.2672]
#> 12 7 99 E(Y|A=1)-E(Y|A=0) 0.3330 0.1899 0.05 [-0.0439,0.71]
#> 15 8 168 E(Y|A=1)-E(Y|A=0) 0.2750 0.1306 0.05 [0.0171,0.5329]
#> 18 9 107 E(Y|A=1)-E(Y|A=0) 0.5668 0.1767 0.05 [0.2164,0.9171]
#>
#> attr(,"class")
#> [1] "summary.PRISM"
plot(res0) # same as plot(res0, type="submod")
```

```
## This is the same as running ##
# res1 = PRISM(Y=Y, A=A, X=X, family="gaussian", filter="filter_glmnet",
# ple = "ple_ranger", submod = "submod_lmtree", param="param_ple")
```

The summary gives a high-level overview of the findings (number of subgroups, parameter estimates, variables that survived the filter). The default plot() function currently combines tree plots with parameter estimates using the “ggparty” package. We can also look directly for prognostic effects by omitting A (treatment) from PRISM:

```
# PRISM Default: filter_glmnet, ple_ranger, submod_ctree, param_lm #
res_prog = PRISM(Y=Y, X=X)
#> No Treatment Variable (A) Provided: Searching for Prognostic Effects
#> Observed Data
#> Filtering: filter_glmnet
#> PLE: ple_ranger
#> Subgroup Identification: submod_ctree
#> Parameter Estimation: param_lm
summary(res_prog)
#> $`PRISM Configuration`
#> [1] "filter_glmnet => ple_ranger => submod_ctree => param_lm"
#>
#> $`Variables that Pass Filter`
#> [1] "X1" "X2" "X3" "X5" "X7" "X8" "X10" "X12" "X16" "X18" "X24"
#> [12] "X26" "X31" "X40" "X46" "X50"
#>
#> $`Number of Identified Subgroups`
#> [1] 6
#>
#> $`Parameter Estimates`
#> Subgrps N estimand est SE alpha CI
#> 1 0 800 E(Y) 1.6966 0.0372 0.05 [1.6235,1.7697]
#> 2 4 132 E(Y) 1.1119 0.0970 0.05 [0.92,1.3038]
#> 3 5 266 E(Y) 1.5107 0.0636 0.05 [1.3855,1.636]
#> 4 7 113 E(Y) 1.7016 0.0995 0.05 [1.5045,1.8987]
#> 5 8 122 E(Y) 2.1780 0.0856 0.05 [2.0085,2.3474]
#> 6 10 87 E(Y) 1.9091 0.1006 0.05 [1.7091,2.1091]
#> 7 11 80 E(Y) 2.6842 0.1133 0.05 [2.4586,2.9097]
#>
#> attr(,"class")
#> [1] "summary.PRISM"
plot(res_prog)
```

Next, circling back to the first PRISM model with treatment included, let’s review other core PRISM outputs and plotting functionality. Results relating to the filter include “filter.mod” (model output) and “filter.vars” (variables that pass the filter).

```
# elastic net model: loss by lambda #
plot(res0$filter.mod)
```

```
## Variables that remain after filtering ##
res0$filter.vars
#> [1] "X1" "X2" "X3" "X5" "X7" "X8" "X10" "X12" "X16" "X18" "X24"
#> [12] "X26" "X31" "X40" "X46" "X50"
# All predictive variables (X1, X2) and prognostic variables (X3, X5, X7) remain.
```

Results relating to the PLE model include “ple.mod” (model output), “mu_train” (training predictions), and “mu_test” (test predictions), where, for continuous or binary data, predictions are of E(Y|X,A=a) and E(Y|X,A=1)-E(Y|X,A=0). The PLEs, or individual treatment effects, are informative of the overall treatment heterogeneity and can be visualized through built-in density and waterfall plots. In this case, roughly 73% of patients are estimated to benefit from treatment A=1 vs A=0 (PLE>0). PRISM plots are built using “ggplot2”, making it easy to enhance plot visualizations. For example:

```
prob.PLE = mean(I(res0$mu_train$PLE>0))
# Density Plot #
plot(res0, type="PLE:density") + geom_vline(xintercept = 0) +
  geom_text(x=1, y=0.4, label=paste("Prob(PLE>0)=", prob.PLE, sep=""))
```

```
# Waterfall Plot #
plot(res0, type="PLE:waterfall") + geom_vline(xintercept = 0) +
  geom_text(x=200, y=1, label=paste("Prob(PLE>0)=", prob.PLE, sep=""))
```

Next, the subgroup model (lmtree) identifies subgroups with varying treatment effects; here, five subgroups are found. By plotting the subgroup model object (“submod.fit$mod”), we see that partitions are made through X1 (predictive) and X2 (predictive). Each node reports parameter estimates for node (subgroup) specific OLS models, \(Y\sim \beta_0+\beta_1 A\). For example, patients in nodes 4 and 6 have estimated treatment effects of 0.47 and 0.06, respectively. Subgroup predictions for the train/test sets can be found in the “out.train” and “out.test” data sets.

`plot(res0$submod.fit$mod, terminal_panel = NULL)`

```
table(res0$out.train$Subgrps)
#>
#> 3 4 7 8 9
#> 149 277 99 168 107
table(res0$out.test$Subgrps)
#>
#> 3 4 7 8 9
#> 149 277 99 168 107
```

These estimates tend to be overly positive or negative, as the same data that trains the subgroup model is used to estimate the treatment effects. Resampling, such as bootstrapping, can generally be used to obtain “de-biased” treatment effect estimates and valid inference (more details later). An alternative approach, without resampling, is to use the PLEs for parameter estimation and inference (param=“param_ple”). Currently, “param_ple” is only implemented for continuous and binary data. For the overall population and each subgroup (\(k=0,...,K\)), parameter estimates are obtained by averaging the PLEs:

\[ \hat{\theta}_k = \frac{1}{n_k} \sum_{i \in S_k} \hat{\theta}(x_i) \] For SEs/CIs, we utilize “pseudo-outcomes”: \[ Y^{\star}_i = \frac{AY - (A-P(A|X))E(Y|A=1,X)}{P(A|X)} - \frac{(1-A)Y - (A-P(A|X))E(Y|A=0,X)}{1-P(A|X)}\] Note that \(E(Y^{\star}_i)=E(Y|A=1,X)-E(Y|A=0,X)\) and \(E(n_k^{-1}\sum_{i \in S_k} Y^{\star}_i)= E(Y|A=1, X \in S_k)-E(Y|A=0, X \in S_k)\). Next: \[SE(\hat{\theta}_k) = \sqrt{ n_k^{-2} \sum_{i \in S_k} \left( Y^{\star}_i-\hat{\theta}(x_i) \right)^2 } \] CIs can then be formed using t- or Z-intervals. For example, a two-sided 95% Z-interval is \(CI_{\alpha}(\hat{\theta}_{k}) = \left[\hat{\theta}_{k} \pm 1.96 \cdot SE(\hat{\theta}_k) \right]\).
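To make these formulas concrete, here is a small R sketch that computes the subgroup estimate, the pseudo-outcomes, and the SE on simulated data. This mirrors the formulas above, not the package's internal code; the data-generating model and all variable names are hypothetical, and a 1:1 randomized trial is assumed so that \(P(A|X)=0.5\):

```
set.seed(1)
n   <- 200
A   <- rbinom(n, 1, 0.5)           # binary treatment, 1:1 randomized
X1  <- rnorm(n)
mu1 <- 1 + 0.5 * X1                # stand-in for E(Y|A=1,X)
mu0 <- rep(1, n)                   # stand-in for E(Y|A=0,X)
Y   <- ifelse(A == 1, mu1, mu0) + rnorm(n, sd = 0.5)
pi  <- rep(0.5, n)                 # P(A|X) under 1:1 randomization
ple <- mu1 - mu0                   # patient-level estimates, theta(x_i)

# Pseudo-outcome Y*_i, per the formula above
Ystar <- (A * Y - (A - pi) * mu1) / pi -
  ((1 - A) * Y - (A - pi) * mu0) / (1 - pi)

# Subgroup S_k defined by a hypothetical rule X1 > 0
S_k       <- X1 > 0
n_k       <- sum(S_k)
theta_hat <- mean(ple[S_k])                                # point estimate
SE_hat    <- sqrt(sum((Ystar[S_k] - ple[S_k])^2) / n_k^2)  # SE formula above
CI        <- theta_hat + c(-1, 1) * 1.96 * SE_hat          # 95% Z-interval
```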

Moving back to the PRISM outputs: for any of the provided “param” options, a key output is the object “param.dat”. By default, “param.dat” contains point estimates, standard errors, lower/upper confidence limits (depending on alpha_s and alpha_ovrl), and p-values. This output feeds directly into the previously shown default (“submod”) and “forest” plots.

```
## Overall/subgroup specific parameter estimates/inference
res0$param.dat
#> Subgrps N estimand est SE LCL
#> 1 0 800 E(Y|A=0) 1.63776866 0.04569877 1.54806484
#> 2 0 800 E(Y|A=1) 1.84286992 0.04866618 1.74734124
#> 3 0 800 E(Y|A=1)-E(Y|A=0) 0.20510125 0.06328619 0.08087441
#> 4 3 149 E(Y|A=0) 1.28263595 0.11176578 1.06177307
#> 5 3 149 E(Y|A=1) 1.31794039 0.10980919 1.10094399
#> 6 3 149 E(Y|A=1)-E(Y|A=0) 0.03530445 0.15439447 -0.26979794
#> 7 4 277 E(Y|A=0) 1.60937815 0.06712763 1.47723094
#> 8 4 277 E(Y|A=1) 1.67799950 0.07693142 1.52655259
#> 9 4 277 E(Y|A=1)-E(Y|A=0) 0.06862135 0.10088192 -0.12997442
#> 10 7 99 E(Y|A=0) 1.59590822 0.14128296 1.31553678
#> 11 7 99 E(Y|A=1) 1.92893256 0.13048087 1.66999752
#> 12 7 99 E(Y|A=1)-E(Y|A=0) 0.33302434 0.18994596 -0.04391723
#> 13 8 168 E(Y|A=0) 1.76751942 0.08920057 1.59141332
#> 14 8 168 E(Y|A=1) 2.04252025 0.09776124 1.84951308
#> 15 8 168 E(Y|A=1)-E(Y|A=0) 0.27500084 0.13063657 0.01708886
#> 16 9 107 E(Y|A=0) 2.04080610 0.13504592 1.77306443
#> 17 9 107 E(Y|A=1) 2.60756289 0.11565560 2.37826441
#> 18 9 107 E(Y|A=1)-E(Y|A=0) 0.56675679 0.17669715 0.21643749
#> UCL pval alpha
#> 1 1.7274725 1.879424e-168 0.05
#> 2 1.9383986 1.723772e-180 0.05
#> 3 0.3293281 1.241074e-03 0.05
#> 4 1.5034988 3.314673e-22 0.05
#> 5 1.5349368 1.324710e-23 0.05
#> 6 0.3404068 8.194458e-01 0.05
#> 7 1.7415254 1.972697e-69 0.05
#> 8 1.8294464 5.334423e-62 0.05
#> 9 0.2672171 4.969388e-01 0.05
#> 10 1.8762797 1.916696e-19 0.05
#> 11 2.1878676 1.077698e-26 0.05
#> 12 0.7099659 8.268452e-02 0.05
#> 13 1.9436255 1.028800e-45 0.05
#> 14 2.2355274 1.856343e-48 0.05
#> 15 0.5329128 3.678032e-02 0.05
#> 16 2.3085478 3.359840e-28 0.05
#> 17 2.8368614 3.054161e-42 0.05
#> 18 0.9170761 1.770848e-03 0.05
## Forest plot: Overall/subgroup specific parameter estimates (CIs)
plot(res0, type="submod")
```

`plot(res0, type="forest")`

PLE heatmaps can also be generated from PRISM outputs. Based on a grid of values (with up to three variables), PLEs are estimated for each patient with the grid variables fixed at each combination of values. We then average the PLEs to obtain a point estimate for each set of grid values, and can likewise calculate probabilities. See below; note that the heatmap is also consistent with the truth (treatment benefit for \(X_1>0, X_2>0\) patients).

```
grid.data = expand.grid(X1 = seq(min(X$X1), max(X$X1), by=0.30),
                        X2 = seq(min(X$X2), max(X$X2), by=0.30))
plot(res0, type="heatmap", grid.data = grid.data)
#> $heatmap.est
```

```
#>
#> $heatmap.prob
```

The hyper-parameters for the individual steps of PRISM can also be easily modified. For example, “filter_glmnet” by default selects covariates based on “lambda.min”, while “ple_ranger” and “submod_lmtree” each require nodes to contain at least 10% of the total observations. To modify these defaults:

```
# PRISM Default: filter_glmnet, ple_ranger, submod_lmtree, param_ple #
# Change hyper-parameters #
res_new_hyper = PRISM(Y=Y, A=A, X=X, filter.hyper = list(lambda="lambda.1se"),
                      ple.hyper = list(min.node.pct=0.05),
                      submod.hyper = list(minsize=200), verbose=FALSE)
plot(res_new_hyper)
```

Survival outcomes are also allowed in PRISM. The default settings use glmnet to filter (“filter_glmnet”), ranger for patient-level estimates (“ple_ranger”; for survival, the output is the restricted mean survival time treatment difference), “submod_weibull” (MOB with a Weibull loss function) for subgroup identification, and “param_cox” (subgroup-specific Cox regression models) for parameter estimation. Another subgroup option is “submod_ctree”, which uses the conditional inference tree (ctree) algorithm to find subgroups; this looks for partitions irrespective of treatment assignment and thus corresponds to finding prognostic effects.

```
library(survival)
library(ggplot2)
# Load TH.data (no treatment; generate treatment randomly to simulate null effect) ##
data("GBSG2", package = "TH.data")
surv.dat = GBSG2
# Design Matrices ###
Y = with(surv.dat, Surv(time, cens))
X = surv.dat[,!(colnames(surv.dat) %in% c("time", "cens")) ]
A = rbinom( n = dim(X)[1], size=1, prob=0.5 )
# Default: filter_glmnet ==> ple_ranger (estimates patient-level RMST(1 vs 0) ==> submod_weibull (MOB with Weibull) ==> param_cox (Cox regression)
res_weibull1 = PRISM(Y=Y, A=A, X=X)
#> Observed Data
#> Filtering: filter_glmnet
#> PLE: ple_ranger
#> Subgroup Identification: submod_weibull
#> Parameter Estimation: param_cox
plot(res_weibull1, type="PLE:waterfall")
```

`plot(res_weibull1)`

```
# PRISM: filter_glmnet ==> submod_ctree ==> param_cox (Cox regression) #
res_ctree1 = PRISM(Y=Y, A=A, X=X, submod = "submod_ctree")
#> Observed Data
#> Filtering: filter_glmnet
#> PLE: ple_ranger
#> Subgroup Identification: submod_ctree
#> Parameter Estimation: param_cox
plot(res_ctree1)
```

Resampling methods are also a feature in PRISM. Bootstrap (resample=“Bootstrap”), permutation (resample=“Permutation”), and cross-validation (resample=“CV”) resampling are included. Resampling can be used for obtaining de-biased or “honest” subgroup estimates, inference, and/or probability statements. For each resampling method, the sampling mechanism can be stratified by the discovered subgroups (default: stratify=TRUE). To summarize:

**Bootstrap Resampling**

Given observed data \((Y, A, X)\), fit \(PRISM(Y,A,X)\). Based on the identified \(k=1,...,K\) subgroups, output subgroup assignment for each patient. For the overall population and each subgroup (\(k=0,...,K\)), store the associated parameter estimates (\(\hat{\theta}_{k}\)). For \(r=1,...,R\) resamples with replacement, \((Y_r, A_r, X_r)\), fit \(PRISM(Y_r, A_r, X_r)\) and obtain new subgroup assignments \(k_r=1,...,K_r\) with associated parameter estimates \(\hat{\theta}_{k_r}\). The bootstrap estimate for each originally identified subgroup is then: \[ \hat{\theta}_{rk} = \frac{\sum_{k_r} n(k \cap k_r) \hat{\theta}_{k_r}}{\sum_{k_r} n(k \cap k_r)} \] where \(n(k \cap k_r)\) is the number of subjects in the original subgroup \(k\) who are also in the bootstrap subgroup \(k_r\). The bootstrap smoothed estimate and standard error are then: \[ \tilde{\theta}_{k} = R^{-1} \sum_r \hat{\theta}_{rk} \] \[ SE(\hat{\theta}_{k})_B = \sqrt{ R^{-1} \sum_r (\hat{\theta}_{rk}-\tilde{\theta}_{k})^2 } \] Bootstrap confidence intervals can then be formed as \(\left[\hat{\theta}_{k} \pm 1.96 \cdot SE(\hat{\theta}_k)_B \right]\); alternatively, the percentile method based on the bootstrap distribution can be used. Bootstrap calibration, where the alpha level is adjusted so that we obtain on average \(1-\alpha\) coverage across all identified subgroups, is also implemented for PRISM (default: calibrate=TRUE). See Loh et al. 2016 (GUIDE) for more details. Other metrics, such as bias (which can be used to adjust the initial subgroup estimates) and probability statements, can also be calculated.
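The overlap-weighted estimate \(\hat{\theta}_{rk}\) can be illustrated with a toy R snippet. The function and variable names here are hypothetical, written only to mirror the formula above:

```
# theta_rk = sum_kr n(k ∩ k_r) * theta_{k_r} / sum_kr n(k ∩ k_r)
boot_estimate <- function(k, orig, boot, theta_boot) {
  overlap <- table(boot[orig == k])   # n(k ∩ k_r) for each bootstrap subgroup k_r
  w <- as.numeric(overlap)
  sum(w * theta_boot[names(overlap)]) / sum(w)
}

# Toy example: 5 patients, original subgroup labels vs one resample's labels
orig       <- c(1, 1, 1, 2, 2)
boot       <- c(1, 1, 2, 2, 2)
theta_boot <- c("1" = 0.10, "2" = 0.50)   # resample's subgroup estimates

boot_estimate(1, orig, boot, theta_boot)  # weighted average: (2*0.10 + 1*0.50)/3
```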

Returning to the survival example, we now re-run PRISM with 50 bootstrap resamples. The smoothed bootstrap estimates, bootstrap standard errors, bootstrap bias, percentile CI, and calibrated CI correspond to “est_resamp”, “SE_resamp”, “bias.boot”, “LCL.pct”/“UCL.pct”, and “LCL.calib”/“UCL.calib” respectively. We can also plot a density plot of the bootstrap distributions through the plot(…,type=“resample”) option.

```
library(ggplot2)
library(dplyr)
res_boot = PRISM(Y=Y, A=A, X=X, resample = "Bootstrap", R=50, ple = "None")
# Plot of distributions #
plot(res_boot, type="resample", estimand = "HR(A=1 vs A=0)")+geom_vline(xintercept = 1)
```

**Permutation Resampling**

Permutation resampling (resample=“Permutation”) follows the same general procedure as bootstrap resampling. The main difference is that the treatment assignment \(A\) is randomly shuffled without replacement, which simulates the null hypothesis of no treatment effect. Key outputs are the permutation p-values (pval_perm in param.dat) and the permutation resampling distributions.

**Cross-Validation**

Cross-validation resampling (resample=“CV”) also follows the same general procedure as bootstrap resampling. Given observed data \((Y, A, X)\), fit \(PRISM(Y,A,X)\). Based on the identified \(k=1,..,K\) subgroups, output subgroup assignment for each patient. Next, split the data into \(R\) folds (ex: 5). For fold \(r\), fit PRISM on \((Y[-r],A[-r], X[-r])\) and predict the subgroup assignments for patients in fold \(r\). The data in fold \(r\) is then used to obtain parameter estimates for each subgroup, \(\hat{\theta}_{k_r}\). Estimates for the original subgroups are then obtained using the same formula as with the bootstrap. This is repeated for each fold, and “CV-corrected” subgroup parameter estimates can be obtained by averaging the fold specific subgroup estimates.
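A minimal sketch of the fold-splitting logic described above (an assumed illustration, not the package's internal code):

```
# Assign n patients to R folds at random, then loop: train on R-1 folds,
# estimate subgroup parameters on the held-out fold.
set.seed(1)
n <- 100; R <- 5
fold <- sample(rep(1:R, length.out = n))   # balanced random fold assignment

for (r in 1:R) {
  train_idx <- which(fold != r)            # fit PRISM on these patients
  test_idx  <- which(fold == r)            # estimate theta_{k_r} on these
  # fit <- PRISM(Y[train_idx], A[train_idx], X[train_idx, ]) ...
}
table(fold)                                # each fold contains n/R = 20 patients
```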

Overall, PRISM is a flexible algorithm that can aid in subgroup detection and exploration of heterogeneous treatment effects. Each step of PRISM is customizable, allowing for fast experimentation and improvement of individual steps. More details on creating user-specific models can be found in the “User_Specific_Models_PRISM” vignette. The StratifiedMedicine R package and PRISM will be continually updated and improved, and user feedback will further facilitate improvements.