SuperLearner Ensemble with R
1 SuperLearner Introduction
1.1 Outline
- Software requirements
- Background
- Create dataset
- Review available models
- Fit single models
- Fit ensemble
- Predict on new dataset
- Customize a model setting
- External cross-validation
- Test multiple hyperparameter settings
- Parallelize across CPUs
- Distribution of ensemble weights
- Feature selection (screening)
- Optimize for AUC
- XGBoost hyperparameter exploration
- Future topics
- References
1.2 Software requirements and installation
Make sure you have R 3.2 or greater, preferably 3.3+.
Install the stable version of SuperLearner from CRAN:
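install.packages("SuperLearner")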
Or the development version from Github:
# Install devtools first:
# install.packages("devtools")
devtools::install_github("ecpolley/SuperLearner")
The Github version generally has new features and bug fixes, but may also introduce new bugs. We will use the Github version, and if we run into any bugs we can report them. This material is also currently consistent with the latest SuperLearner on CRAN (2.0-21).
Install the other packages we will use:
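# A reasonable set based on the packages used below; adjust as needed.
install.packages(c("caret", "glmnet", "randomForest", "ggplot2",
                   "RhpcBLASctl", "ROCR", "cvAUC", "mlbench"))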
For XGBoost we need to tweak the install command a bit; Windows users may need to install Rtools first.
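# One approach from the time of writing (a sketch; the plain CRAN
# install.packages("xgboost") may also suffice):
install.packages("xgboost", repos = c("http://dmlc.ml/drat/", getOption("repos")),
                 type = "source")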
1.3 Background
SuperLearner is an algorithm that uses cross-validation to estimate the performance of multiple machine learning models, or the same model with different settings. It then creates an optimal weighted average of those models, aka an “ensemble”, using the test data performance. This approach has been proven to be asymptotically as accurate as the best possible prediction algorithm that is tested.
(I am oversimplifying this in the interest of time. Please see the references for more detailed information, especially “SuperLearner in Prediction”.)
1.4 Create dataset
We will be using the “BreastCancer” dataset, which is available from the “mlbench” package.
############################
# Setup test dataset from mlbench.
# NOTE: install mlbench package if you don't already have it.
data(BreastCancer, package="mlbench")
# Remove missing values - could impute for improved accuracy.
data = na.omit(BreastCancer)
# Set a seed for reproducibility in this random sampling.
set.seed(1)
# Remove Id and Class columns; expand out factors into indicators.
data2 = data.frame(model.matrix( ~ . - 1, subset(data, select = -c(Id, Class))))
# Check dimensions after we expand our dataset.
dim(data2)
## [1] 683 81
# Remove zero variance (constant) and near-zero-variance columns.
# This can help reduce overfitting and also helps us use a basic glm().
# However, there is a slight risk that we are discarding helpful information.
preproc = caret::preProcess(data2, method = c("zv", "nzv"))
data2 = predict(preproc, data2)
rm(preproc)
# Review our dimensions.
dim(data2)
## [1] 683 52
# Reduce to a dataset of 100 observations to speed up model fitting.
train_obs = sample(nrow(data2), 100)
# X is our training sample.
X = data2[train_obs, ]
# Create a holdout set for evaluating model performance.
X_holdout = data2[-train_obs, ]
# Create a binary outcome variable.
outcome = as.numeric(data$Class == "malignant")
Y = outcome[train_obs]
Y_holdout = outcome[-train_obs]
# Review the outcome variable distribution.
table(Y, useNA = "ifany")
## Y
## 0 1
## 53 47
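# Review the structure of our training sample.
str(X)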
## 'data.frame': 100 obs. of 52 variables:
## $ Cl.thickness1 : num 0 0 0 1 0 1 0 0 0 0 ...
## $ Cl.thickness2 : num 0 0 0 0 1 0 0 0 0 0 ...
## $ Cl.thickness3 : num 1 0 0 0 0 0 0 0 0 0 ...
## $ Cl.thickness4 : num 0 0 0 0 0 0 0 1 0 0 ...
## $ Cl.thickness5 : num 0 1 0 0 0 0 1 0 1 0 ...
## $ Cl.thickness8 : num 0 0 0 0 0 0 0 0 0 1 ...
## $ Cl.thickness10 : num 0 0 0 0 0 0 0 0 0 0 ...
## $ Cell.size.L : num -0.495 -0.495 -0.055 -0.495 -0.495 ...
## $ Cell.size.Q : num 0.522 0.522 -0.348 0.522 0.522 ...
## $ Cell.size.C : num -0.453 -0.453 0.13 -0.453 -0.453 ...
## $ Cell.size.4 : num 0.337 0.337 0.337 0.337 0.337 ...
## $ Cell.size.5 : num -0.215 -0.215 -0.215 -0.215 -0.215 ...
## $ Cell.size.6 : num 0.117 0.117 -0.311 0.117 0.117 ...
## $ Cell.size.7 : num -0.0527 -0.0527 0.3279 -0.0527 -0.0527 ...
## $ Cell.size.8 : num 0.0187 0.0187 0.2618 0.0187 0.0187 ...
## $ Cell.size.9 : num -0.00454 -0.00454 -0.57143 -0.00454 -0.00454 ...
## $ Cell.shape.L : num -0.495 -0.275 0.055 -0.495 -0.495 ...
## $ Cell.shape.Q : num 0.522 -0.087 -0.348 0.522 0.522 ...
## $ Cell.shape.C : num -0.453 0.378 -0.13 -0.453 -0.453 ...
## $ Cell.shape.4 : num 0.337 -0.318 0.337 0.337 0.337 ...
## $ Cell.shape.5 : num -0.2148 -0.0358 0.2148 -0.2148 -0.2148 ...
## $ Cell.shape.6 : num 0.117 0.389 -0.311 0.117 0.117 ...
## $ Cell.shape.7 : num -0.0527 -0.5035 -0.3279 -0.0527 -0.0527 ...
## $ Cell.shape.8 : num 0.0187 0.374 0.2618 0.0187 0.0187 ...
## $ Cell.shape.9 : num -0.00454 -0.16327 0.57143 -0.00454 -0.00454 ...
## $ Marg.adhesion.L : num -0.495 -0.495 0.495 -0.275 -0.495 ...
## $ Marg.adhesion.Q : num 0.522 0.522 0.522 -0.087 0.522 ...
## $ Marg.adhesion.C : num -0.453 -0.453 0.453 0.378 -0.453 ...
## $ Marg.adhesion.4 : num 0.337 0.337 0.337 -0.318 0.337 ...
## $ Marg.adhesion.5 : num -0.2148 -0.2148 0.2148 -0.0358 -0.2148 ...
## $ Marg.adhesion.6 : num 0.117 0.117 0.117 0.389 0.117 ...
## $ Marg.adhesion.7 : num -0.0527 -0.0527 0.0527 -0.5035 -0.0527 ...
## $ Marg.adhesion.8 : num 0.0187 0.0187 0.0187 0.374 0.0187 ...
## $ Marg.adhesion.9 : num -0.00454 -0.00454 0.00454 -0.16327 -0.00454 ...
## $ Epith.c.size.L : num -0.275 -0.385 -0.165 -0.495 -0.275 ...
## $ Epith.c.size.Q : num -0.087 0.174 -0.261 0.522 -0.087 ...
## $ Epith.c.size.C : num 0.378 0.151 0.335 -0.453 0.378 ...
## $ Epith.c.size.4 : num -0.3179 -0.4114 0.0561 0.3366 -0.3179 ...
## $ Epith.c.size.5 : num -0.0358 0.5013 -0.3939 -0.2148 -0.0358 ...
## $ Epith.c.size.6 : num 0.389 -0.428 0.234 0.117 0.389 ...
## $ Epith.c.size.7 : num -0.5035 0.2752 0.2459 -0.0527 -0.5035 ...
## $ Epith.c.size.8 : num 0.374 -0.1309 -0.5236 0.0187 0.374 ...
## $ Epith.c.size.9 : num -0.16327 0.04082 0.38095 -0.00454 -0.16327 ...
## $ Bare.nuclei10 : num 0 0 1 0 0 0 0 0 0 1 ...
## $ Bl.cromatin2 : num 0 1 0 0 1 0 1 0 0 0 ...
## $ Bl.cromatin3 : num 0 0 0 0 0 1 0 0 0 0 ...
## $ Bl.cromatin4 : num 0 0 0 0 0 0 0 0 0 0 ...
## $ Bl.cromatin7 : num 0 0 0 0 0 0 0 1 0 0 ...
## $ Normal.nucleoli2 : num 0 0 0 0 0 0 0 0 0 0 ...
## $ Normal.nucleoli3 : num 0 0 1 0 0 0 0 1 0 0 ...
## $ Normal.nucleoli10: num 0 0 0 0 0 0 0 0 0 0 ...
## $ Mitoses2 : num 0 0 0 0 0 0 0 0 0 0 ...
1.5 Review available models
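# Review available models.
listWrappers()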
## All prediction algorithm wrappers in SuperLearner:
## [1] "SL.bartMachine" "SL.bayesglm" "SL.biglasso"
## [4] "SL.caret" "SL.caret.rpart" "SL.cforest"
## [7] "SL.earth" "SL.extraTrees" "SL.gam"
## [10] "SL.gbm" "SL.glm" "SL.glm.interaction"
## [13] "SL.glmnet" "SL.ipredbagg" "SL.kernelKnn"
## [16] "SL.knn" "SL.ksvm" "SL.lda"
## [19] "SL.leekasso" "SL.lm" "SL.loess"
## [22] "SL.logreg" "SL.mean" "SL.nnet"
## [25] "SL.nnls" "SL.polymars" "SL.qda"
## [28] "SL.randomForest" "SL.ranger" "SL.ridge"
## [31] "SL.rpart" "SL.rpartPrune" "SL.speedglm"
## [34] "SL.speedlm" "SL.step" "SL.step.forward"
## [37] "SL.step.interaction" "SL.stepAIC" "SL.svm"
## [40] "SL.template" "SL.xgboost"
##
## All screening algorithm wrappers in SuperLearner:
## [1] "All"
## [1] "screen.corP" "screen.corRank" "screen.glmnet"
## [4] "screen.randomForest" "screen.SIS" "screen.template"
## [7] "screen.ttest" "write.screen.template"
## function (Y, X, newX, family, obsWeights, id, alpha = 1, nfolds = 10,
## nlambda = 100, useMin = TRUE, loss = "deviance", ...)
## {
## .SL.require("glmnet")
## if (!is.matrix(X)) {
## X <- model.matrix(~-1 + ., X)
## newX <- model.matrix(~-1 + ., newX)
## }
## fitCV <- glmnet::cv.glmnet(x = X, y = Y, weights = obsWeights,
## lambda = NULL, type.measure = loss, nfolds = nfolds,
## family = family$family, alpha = alpha, nlambda = nlambda,
## ...)
## pred <- predict(fitCV, newx = newX, type = "response", s = ifelse(useMin,
## "lambda.min", "lambda.1se"))
## fit <- list(object = fitCV, useMin = useMin)
## class(fit) <- "SL.glmnet"
## out <- list(pred = pred, fit = fit)
## return(out)
## }
## <bytecode: 0x7fade3b6bd90>
## <environment: namespace:SuperLearner>
For maximum accuracy I recommend testing at least the following models: glmnet, randomForest, XGBoost, SVM, and bartMachine, each with multiple hyperparameter settings.
1.6 Fit single models
Let’s fit 2 separate models: lasso (sparse, penalized regression) and randomForest. We specify family = binomial() because we are predicting a binary outcome, aka classification. With a continuous outcome we would specify family = gaussian().
set.seed(1)
# Fit lasso model.
sl_lasso = SuperLearner(Y = Y, X = X, family = binomial(), SL.library = "SL.glmnet")
## Loading required namespace: glmnet
##
## Call:
## SuperLearner(Y = Y, X = X, family = binomial(), SL.library = "SL.glmnet")
##
##
## Risk Coef
## SL.glmnet_All 0.0537245 1
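# Review the elements in the SuperLearner object.
names(sl_lasso)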
## [1] "call" "libraryNames" "SL.library"
## [4] "SL.predict" "coef" "library.predict"
## [7] "Z" "cvRisk" "family"
## [10] "fitLibrary" "cvFitLibrary" "varNames"
## [13] "validRows" "method" "whichScreen"
## [16] "control" "cvControl" "errorsInCVLibrary"
## [19] "errorsInLibrary" "metaOptimizer" "env"
## [22] "times"
# Here is the risk of the best model (discrete SuperLearner winner).
sl_lasso$cvRisk[which.min(sl_lasso$cvRisk)]
## SL.glmnet_All
## 0.0537245
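# Review the underlying glmnet fit; the element name matches the
# "SL.glmnet_All" library name shown above.
str(sl_lasso$fitLibrary$SL.glmnet_All$object)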
## List of 12
## $ lambda : num [1:85] 0.398 0.362 0.33 0.301 0.274 ...
## $ cvm : num [1:85] 1.42 1.32 1.22 1.14 1.07 ...
## $ cvsd : num [1:85] 0.0273 0.0265 0.0297 0.0352 0.0405 ...
## $ cvup : num [1:85] 1.44 1.34 1.25 1.18 1.11 ...
## $ cvlo : num [1:85] 1.39 1.29 1.19 1.11 1.03 ...
## $ nzero : Named int [1:85] 0 1 2 2 2 2 2 4 4 6 ...
## ..- attr(*, "names")= chr [1:85] "s0" "s1" "s2" "s3" ...
## $ call : language glmnet::cv.glmnet(x = X, y = Y, weights = obsWeights, lambda = NULL, type.measure = loss, nfolds = nfolds, f| __truncated__
## $ name : Named chr "Binomial Deviance"
## ..- attr(*, "names")= chr "deviance"
## $ glmnet.fit:List of 13
## ..$ a0 : Named num [1:85] -0.1201 -0.0524 0.0114 0.0753 0.137 ...
## .. ..- attr(*, "names")= chr [1:85] "s0" "s1" "s2" "s3" ...
## ..$ beta :Formal class 'dgCMatrix' [package "Matrix"] with 6 slots
## .. .. ..@ i : int [1:1451] 7 7 16 7 16 7 16 7 16 7 ...
## .. .. ..@ p : int [1:86] 0 0 1 3 5 7 9 11 15 19 ...
## .. .. ..@ Dim : int [1:2] 52 85
## .. .. ..@ Dimnames:List of 2
## .. .. .. ..$ : chr [1:52] "Cl.thickness1" "Cl.thickness2" "Cl.thickness3" "Cl.thickness4" ...
## .. .. .. ..$ : chr [1:85] "s0" "s1" "s2" "s3" ...
## .. .. ..@ x : num [1:1451] 0.3846 0.6968 0.0524 0.8766 0.2379 ...
## .. .. ..@ factors : list()
## ..$ df : int [1:85] 0 1 2 2 2 2 2 4 4 6 ...
## ..$ dim : int [1:2] 52 85
## ..$ lambda : num [1:85] 0.398 0.362 0.33 0.301 0.274 ...
## ..$ dev.ratio : num [1:85] 1.12e-15 7.79e-02 1.45e-01 2.04e-01 2.55e-01 ...
## ..$ nulldev : num 138
## ..$ npasses : int 1006
## ..$ jerr : int 0
## ..$ offset : logi FALSE
## ..$ classnames: chr [1:2] "0" "1"
## ..$ call : language glmnet(x = X, y = Y, weights = obsWeights, lambda = NULL, family = family$family, alpha = alpha, nlambda = nlambda)
## ..$ nobs : int 100
## ..- attr(*, "class")= chr [1:2] "lognet" "glmnet"
## $ lambda.min: num 0.00799
## $ lambda.1se: num 0.0388
## $ index : int [1:2, 1] 43 26
## ..- attr(*, "dimnames")=List of 2
## .. ..$ : chr [1:2] "min" "1se"
## .. ..$ : chr "Lambda"
## - attr(*, "class")= chr "cv.glmnet"
# Fit random forest.
sl_rf = SuperLearner(Y = Y, X = X, family = binomial(), SL.library = "SL.randomForest")
## Loading required namespace: randomForest
##
## Call:
## SuperLearner(Y = Y, X = X, family = binomial(), SL.library = "SL.randomForest")
##
##
##
## Risk Coef
## SL.randomForest_All 0.05425658 1
Risk is a measure of model accuracy or performance. We want our models to minimize the estimated risk, which means the model is making the fewest mistakes in its predictions. By default risk is the mean squared error of the predictions (as in a regression model), but it can be customized, e.g. to AUC as we do later.
SuperLearner is using cross-validation to estimate the risk on future data. By default it uses 10 folds; use the cvControl argument to customize.
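For example, a minimal sketch of requesting 20 internal folds:

# Sketch: customize the internal cross-validation to 20 folds.
sl = SuperLearner(Y = Y, X = X, family = binomial(),
                  SL.library = "SL.glmnet",
                  cvControl = list(V = 20))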
The coefficient column tells us the weight or importance of each individual learner in the overall ensemble. In this case we only have one algorithm so the coefficient has to be 1. If a coefficient is 0 it means that the algorithm isn’t being used in the SuperLearner ensemble.
1.7 Fit ensemble
Instead of fitting the models separately and looking at the performance (lowest risk), let’s fit them simultaneously. SuperLearner will then tell us which one is best (Discrete winner) and also create a weighted average of multiple models.
We include the mean of Y (“SL.mean”) as a benchmark algorithm. It is a very simple prediction so the more complex algorithms should do better than the sample mean. We hope to see that it isn’t the best single algorithm (discrete winner) and has a low weight in the weighted-average ensemble. If it is the best algorithm something has likely gone wrong.
set.seed(1)
sl = SuperLearner(Y = Y, X = X, family = binomial(),
SL.library = c("SL.mean", "SL.glmnet", "SL.randomForest"))
sl
##
## Call:
## SuperLearner(Y = Y, X = X, family = binomial(), SL.library = c("SL.mean",
## "SL.glmnet", "SL.randomForest"))
##
##
## Risk Coef
## SL.mean_All 0.25240741 0.0000000
## SL.glmnet_All 0.05052966 0.2962233
## SL.randomForest_All 0.04702437 0.7037767
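# Review how long it took to run the SuperLearner.
sl$times$everything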
## user system elapsed
## 1.866 0.012 1.879
Again, the coefficient is how much weight SuperLearner puts on that model in the weighted average. So if coefficient = 0 it means that model is not used at all. Here we see that random forest is given most of the weight (about 0.70), with lasso receiving the rest and the mean receiving none.
So we have an automatic ensemble of multiple learners based on the cross-validated performance of those learners, woo!
1.8 Predict on data
Now that we have an ensemble let’s predict back on our holdout dataset and review the results.
# Predict back on the holdout dataset.
# onlySL = TRUE means we skip predictions from algorithms that had weight = 0,
# saving computation.
pred = predict(sl, X_holdout, onlySL = TRUE)
# Check the structure of this prediction object.
str(pred)
## List of 2
## $ pred : num [1:583, 1] 0.4646 0.0125 0.8912 0.0257 0.9516 ...
## $ library.predict: num [1:583, 1:3] 0 0 0 0 0 0 0 0 0 0 ...
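# Review the distribution of predictions from each individual learner.
summary(pred$library.predict)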
## V1 V2 V3
## Min. :0 Min. :0.0008422 Min. :0.0170
## 1st Qu.:0 1st Qu.:0.0017465 1st Qu.:0.0180
## Median :0 Median :0.0123248 Median :0.0420
## Mean :0 Mean :0.3303019 Mean :0.3464
## 3rd Qu.:0 3rd Qu.:0.9380359 3rd Qu.:0.8065
## Max. :0 Max. :0.9999950 Max. :1.0000
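# Histogram of our predicted values (presumably the plot that generated
# the binning message below).
library(ggplot2)
qplot(pred$pred[, 1]) + theme_bw()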
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
# Scatterplot of original values (0, 1) and predicted values.
# Ideally we would use jitter or slight transparency to deal with overlap.
qplot(Y_holdout, pred$pred) + theme_bw()
# Review AUC - Area Under Curve
pred_rocr = ROCR::prediction(pred$pred, Y_holdout)
auc = ROCR::performance(pred_rocr, measure = "auc", x.measure = "cutoff")@y.values[[1]]
auc
## [1] 0.9828498
AUC can range from 0.5 (no better than chance) to 1.0 (perfect). So at 0.98 we are looking pretty good!
1.9 Fit ensemble with external cross-validation
What we don’t have yet is an estimate of the performance of the ensemble itself. Right now we are just hopeful that the ensemble weights are successful in improving over the best single algorithm.
In order to estimate the performance of the SuperLearner ensemble we need an “external” layer of cross-validation. Each outer fold holds out a sample that is not used to fit the SuperLearner, which gives us a good estimate of the SuperLearner’s performance on unseen data. Typically we would run 10- or 20-fold external cross-validation, but even 5-fold is reasonable.
Another nice result is that we get standard errors on the performance of the individual algorithms and can compare them to the SuperLearner.
set.seed(1)
# Don't have timing info for the CV.SuperLearner unfortunately.
# So we need to time it manually.
system.time({
# This will take about 5x as long as the previous SuperLearner.
cv_sl = CV.SuperLearner(Y = Y, X = X, family = binomial(), V = 5,
SL.library = c("SL.mean", "SL.glmnet", "SL.randomForest"))
})
## user system elapsed
## 7.760 0.027 7.792
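# Review cross-validated performance of each learner and the ensemble.
summary(cv_sl)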
##
## Call:
## CV.SuperLearner(Y = Y, X = X, V = 5, family = binomial(), SL.library = c("SL.mean",
## "SL.glmnet", "SL.randomForest"))
##
## Risk is based on: Mean Squared Error
##
## All risk estimates are based on V = 5
##
## Algorithm Ave se Min Max
## Super Learner 0.047086 0.0140517 0.014877 0.072310
## Discrete SL 0.047099 0.0136960 0.015197 0.071311
## SL.mean_All 0.251688 0.0034762 0.247656 0.257500
## SL.glmnet_All 0.059655 0.0174107 0.032515 0.088016
## SL.randomForest_All 0.046707 0.0142283 0.015197 0.071311
# Review the distribution of the best single learner across the external CV folds.
table(simplify2array(cv_sl$whichDiscreteSL))
##
## SL.glmnet_All SL.randomForest_All
## 1 4
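# Plot the cross-validated risk estimates with 95% CIs.
plot(cv_sl) + theme_bw()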
We see two SuperLearner results: “Super Learner” and “Discrete SL”. “Discrete SL” chooses the best single learner in each fold - here that was SL.randomForest in 4 of the 5 folds. “Super Learner” takes a weighted average of the learners using the coefficients/weights that we examined earlier. In general “Super Learner” should perform a little better than “Discrete SL”.
We see based on the outer cross-validation that SuperLearner is statistically tying with the best algorithm. Our benchmark learner “SL.mean” shows that we are getting a nice improvement over a naive guess based only on the mean. We could also add “SL.glm” to compare to logistic regression, and would see that we have much better accuracy.
1.10 Customize a model hyperparameter
Hyperparameters are the configuration settings for an algorithm. OLS has no hyperparameters but essentially every other algorithm does.
There are two ways to customize a hyperparameter: make a new learner function, or use create.Learner().
Let’s make a variant of RandomForest that fits more trees, which may increase our accuracy and can’t hurt it (outside of small random variation).
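# Review the function argument defaults for SL.randomForest.
SL.randomForest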
## function (Y, X, newX, family, mtry = ifelse(family$family ==
## "gaussian", max(floor(ncol(X)/3), 1), floor(sqrt(ncol(X)))),
## ntree = 1000, nodesize = ifelse(family$family == "gaussian",
## 5, 1), maxnodes = NULL, importance = FALSE, ...)
## {
## .SL.require("randomForest")
## if (family$family == "gaussian") {
## fit.rf <- randomForest::randomForest(Y ~ ., data = X,
## ntree = ntree, xtest = newX, keep.forest = TRUE,
## mtry = mtry, nodesize = nodesize, maxnodes = maxnodes,
## importance = importance)
## pred <- fit.rf$test$predicted
## fit <- list(object = fit.rf)
## }
## if (family$family == "binomial") {
## fit.rf <- randomForest::randomForest(y = as.factor(Y),
## x = X, ntree = ntree, xtest = newX, keep.forest = TRUE,
## mtry = mtry, nodesize = nodesize, maxnodes = maxnodes,
## importance = importance)
## pred <- fit.rf$test$votes[, 2]
## fit <- list(object = fit.rf)
## }
## out <- list(pred = pred, fit = fit)
## class(out$fit) <- c("SL.randomForest")
## return(out)
## }
## <bytecode: 0x7fadf2774088>
## <environment: namespace:SuperLearner>
# Create a new function that changes just the ntree argument.
# (We could do this in a single line.)
# "..." means "all other arguments that were sent to the function"
SL.rf.better = function(...) {
SL.randomForest(..., ntree = 3000)
}
set.seed(1)
# Fit the CV.SuperLearner.
cv_sl = CV.SuperLearner(Y = Y, X = X, family = binomial(), V = 5,
SL.library = c("SL.mean", "SL.glmnet", "SL.rf.better", "SL.randomForest"))
# Review results.
summary(cv_sl)
##
## Call:
## CV.SuperLearner(Y = Y, X = X, V = 5, family = binomial(), SL.library = c("SL.mean",
## "SL.glmnet", "SL.rf.better", "SL.randomForest"))
##
## Risk is based on: Mean Squared Error
##
## All risk estimates are based on V = 5
##
## Algorithm Ave se Min Max
## Super Learner 0.047366 0.0147286 0.016236 0.072314
## Discrete SL 0.047023 0.0142379 0.017294 0.070526
## SL.mean_All 0.251688 0.0034762 0.247656 0.257500
## SL.glmnet_All 0.061472 0.0188583 0.032667 0.093601
## SL.rf.better_All 0.046255 0.0141409 0.016433 0.069647
## SL.randomForest_All 0.047536 0.0144537 0.017294 0.072209
Looks like our new RF is not improving performance. This implies that the original default of 1,000 trees had already reached the performance plateau - a maximum accuracy that RF can achieve unless other settings are changed (e.g. maxnodes).
For comparison we can do the same hyperparameter customization with create.Learner().
# Customize the defaults for randomForest.
learners = create.Learner("SL.randomForest", params = list(ntree = 3000))
# Look at the object.
learners
## $grid
## NULL
##
## $names
## [1] "SL.randomForest_1"
##
## $base_learner
## [1] "SL.randomForest"
##
## $params
## $params$ntree
## [1] 3000
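# The name of the generated function.
learners$names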
## [1] "SL.randomForest_1"
# Review the code that was automatically generated for the function.
# Notice that it's exactly the same as the function we made manually.
SL.randomForest_1
## function (...)
## SL.randomForest(..., ntree = 3000)
set.seed(1)
# Fit the CV.SuperLearner.
cv_sl = CV.SuperLearner(Y = Y, X = X, family = binomial(), V = 5,
SL.library = c("SL.mean", "SL.glmnet", learners$names, "SL.randomForest"))
# Review results.
summary(cv_sl)
##
## Call:
## CV.SuperLearner(Y = Y, X = X, V = 5, family = binomial(), SL.library = c("SL.mean",
## "SL.glmnet", learners$names, "SL.randomForest"))
##
## Risk is based on: Mean Squared Error
##
## All risk estimates are based on V = 5
##
## Algorithm Ave se Min Max
## Super Learner 0.047366 0.0147286 0.016236 0.072314
## Discrete SL 0.047023 0.0142379 0.017294 0.070526
## SL.mean_All 0.251688 0.0034762 0.247656 0.257500
## SL.glmnet_All 0.061472 0.0188583 0.032667 0.093601
## SL.randomForest_1_All 0.046255 0.0141409 0.016433 0.069647
## SL.randomForest_All 0.047536 0.0144537 0.017294 0.072209
We get exactly the same results between the two methods.
1.11 Fit multiple hyperparameters for a learner (e.g. RF)
The performance of an algorithm varies based on its hyperparameters, which again are its configuration settings. Some algorithms may not vary much, and others might have much better or worse performance for certain settings. Often we focus our attention on 1 or 2 hyperparameters for a given algorithm because they are the most important ones.
For randomForest there are two particularly important hyperparameters: mtry and maximum leaf nodes. mtry is how many features are randomly chosen as candidates at each decision tree split. Maximum leaf nodes controls how complex each tree can get.
Let’s try 3 different mtry options.
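# Default mtry for classification is the square root of the feature count.
floor(sqrt(ncol(X)))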
## [1] 7
# Let's try 3 multiples of this default: 0.5, 1, and 2.
mtry_seq = floor(sqrt(ncol(X)) * c(0.5, 1, 2))
mtry_seq
## [1] 3 7 14
learners = create.Learner("SL.randomForest", tune = list(mtry = mtry_seq))
# Review the resulting object
learners
## $grid
## mtry
## 1 3
## 2 7
## 3 14
##
## $names
## [1] "SL.randomForest_1" "SL.randomForest_2" "SL.randomForest_3"
##
## $base_learner
## [1] "SL.randomForest"
##
## $params
## list()
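# Review the generated learner functions.
SL.randomForest_1
SL.randomForest_2
SL.randomForest_3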
## function (...)
## SL.randomForest(..., mtry = 3)
## function (...)
## SL.randomForest(..., mtry = 7)
## function (...)
## SL.randomForest(..., mtry = 14)
set.seed(1)
# Fit the CV.SuperLearner.
cv_sl = CV.SuperLearner(Y = Y, X = X, family = binomial(), V = 5,
SL.library = c("SL.mean", "SL.glmnet", learners$names, "SL.randomForest"))
# Review results.
summary(cv_sl)
##
## Call:
## CV.SuperLearner(Y = Y, X = X, V = 5, family = binomial(), SL.library = c("SL.mean",
## "SL.glmnet", learners$names, "SL.randomForest"))
##
## Risk is based on: Mean Squared Error
##
## All risk estimates are based on V = 5
##
## Algorithm Ave se Min Max
## Super Learner 0.048347 0.0146463 0.018151 0.070174
## Discrete SL 0.050907 0.0155736 0.018430 0.073063
## SL.mean_All 0.251688 0.0034762 0.247656 0.257500
## SL.glmnet_All 0.059873 0.0183193 0.032565 0.080905
## SL.randomForest_1_All 0.047292 0.0127504 0.018430 0.067635
## SL.randomForest_2_All 0.046258 0.0141286 0.015514 0.070680
## SL.randomForest_3_All 0.048507 0.0155451 0.016368 0.076339
## SL.randomForest_All 0.045716 0.0138511 0.016448 0.069924
We see here that mtry = 7 performed a little better than mtry = 3 or mtry = 14, although the differences are not significant. If we used more data and more cross-validation folds we might see clearer differences. A higher mtry tends to do better when only a small percentage of variables are predictive of the outcome, because it gives each tree a better chance of finding a useful variable.
Note that SL.randomForest and SL.randomForest_2 have the same settings, and their performance is very similar - statistically a tie. It’s not exactly equivalent due to random variation in the two forests.
A key difference with SuperLearner over caret or other frameworks is that we are not trying to choose the single best hyperparameter or model. Instead, we just want the best weighted average. So we are including all of the different settings in our SuperLearner, and we may choose a weighted average that includes the same model multiple times but with different settings. That can give us better performance than choosing only the single best settings for a given algorithm (which has some random noise in any case).
1.12 Multicore parallelization
SuperLearner makes it easy to use multiple CPU cores on your computer to speed up the calculations. We need to tell R to use multiple CPUs, then tell CV.SuperLearner to use multiple cores.
# Setup parallel computation - use all cores on our computer.
# (Install "parallel" and "RhpcBLASctl" if you don't already have those packages.)
num_cores = RhpcBLASctl::get_num_cores()
# How many cores does this computer have?
num_cores
## [1] 8
# Use all of those cores for parallel SuperLearner.
options(mc.cores = num_cores)
# Check how many parallel workers we are using:
getOption("mc.cores")
## [1] 8
# We need to set a different type of seed that works across cores.
# Otherwise the other cores will go rogue and we won't get repeatable results.
set.seed(1, "L'Ecuyer-CMRG")
# Fit the CV.SuperLearner.
# While this is running check CPU using in Activity Monitor / Task Manager.
system.time({
cv_sl = CV.SuperLearner(Y = Y, X = X, family = binomial(), V = 10,
parallel = "multicore",
SL.library = c("SL.mean", "SL.glmnet", learners$names, "SL.randomForest"))
})
## Loading required namespace: parallel
## user system elapsed
## 48.586 5.581 14.131
##
## Call:
## CV.SuperLearner(Y = Y, X = X, V = 10, family = binomial(), SL.library = c("SL.mean",
## "SL.glmnet", learners$names, "SL.randomForest"), parallel = "multicore")
##
## Risk is based on: Mean Squared Error
##
## All risk estimates are based on V = 10
##
## Algorithm Ave se Min Max
## Super Learner 0.051738 0.0148178 0.010528 0.085671
## Discrete SL 0.054168 0.0158803 0.010528 0.091215
## SL.mean_All 0.252877 0.0033603 0.245679 0.275309
## SL.glmnet_All 0.055457 0.0176376 0.011410 0.099601
## SL.randomForest_1_All 0.049063 0.0132454 0.013374 0.087650
## SL.randomForest_2_All 0.050621 0.0148980 0.010528 0.091215
## SL.randomForest_3_All 0.054359 0.0166724 0.011041 0.097097
## SL.randomForest_All 0.052176 0.0154922 0.010258 0.094506
The “user” component of time is essentially how long it would take on a single core. And the “elapsed” component is how long it actually took. So we can see some gain from using multiple cores.
If we want to use multiple cores for normal SuperLearner, rather than CV.SuperLearner (external cross-validation to estimate performance), we need to change the function name to mcSuperLearner().
# Set multicore compatible seed.
set.seed(1, "L'Ecuyer-CMRG")
# Fit the SuperLearner.
sl = mcSuperLearner(Y = Y, X = X, family = binomial(),
SL.library = c("SL.mean", "SL.glmnet", learners$names, "SL.randomForest"))
sl
##
## Call:
## mcSuperLearner(Y = Y, X = X, family = binomial(), SL.library = c("SL.mean",
## "SL.glmnet", learners$names, "SL.randomForest"))
##
##
## Risk Coef
## SL.mean_All 0.25287654 0.0000000
## SL.glmnet_All 0.05190623 0.3745924
## SL.randomForest_1_All 0.04946344 0.6254076
## SL.randomForest_2_All 0.05259529 0.0000000
## SL.randomForest_3_All 0.05411438 0.0000000
## SL.randomForest_All 0.05135655 0.0000000
## user system elapsed
## 6.627 0.896 1.758
SuperLearner also supports running across multiple computers at a time, called “multi-node” or “cluster” computing. See the examples in ?SuperLearner using snowSuperLearner(), and stay tuned for a future training on highly parallel SuperLearning; h2o.ai will also cover this.
1.13 Weight distribution for SuperLearner
The weights or coefficients of the SuperLearner are stochastic - they will change as the data changes. So we don’t necessarily trust a given set of weights as being the “true” weights, but when we use CV.SuperLearner we at least have multiple samples from the distribution of the weights.
We can write a little function to extract the weights at each CV.SuperLearner iteration and summarize the distribution of those weights. (I’m going to try to get this added to the SuperLearner package sometime soon.)
# Review meta-weights (coefficients) from a CV.SuperLearner object
review_weights = function(cv_sl) {
meta_weights = coef(cv_sl)
means = colMeans(meta_weights)
sds = apply(meta_weights, MARGIN = 2, FUN = sd)
mins = apply(meta_weights, MARGIN = 2, FUN = min)
maxs = apply(meta_weights, MARGIN = 2, FUN = max)
# Combine the stats into a single matrix.
sl_stats = cbind("mean(weight)" = means, "sd" = sds, "min" = mins, "max" = maxs)
# Sort by decreasing mean weight.
sl_stats[order(sl_stats[, 1], decreasing = T), ]
}
print(review_weights(cv_sl), digits = 3)
## mean(weight) sd min max
## SL.randomForest_1_All 0.5968 0.319 0 1.000
## SL.glmnet_All 0.1569 0.206 0 0.655
## SL.randomForest_2_All 0.1552 0.334 0 1.000
## SL.randomForest_3_All 0.0912 0.128 0 0.347
## SL.mean_All 0.0000 0.000 0 0.000
## SL.randomForest_All 0.0000 0.000 0 0.000
Notice that in this case the ensemble never uses the mean or the default-settings randomForest (SL.randomForest_All), and puts most of its weight on the mtry = 3 forest. So adding multiple configurations of randomForest was helpful.
I recommend reviewing the weight distribution for any SuperLearner project to better understand which algorithms are chosen for the ensemble.
1.14 Feature selection (screening)
When datasets have many covariates our algorithms may benefit from first choosing a subset of available covariates, a step called feature selection. Then we pass only those variables to the modeling algorithm, and it may be less likely to overfit to variables that are not related to the outcome.
Let’s revisit listWrappers() and check out the bottom section.
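listWrappers()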
## All prediction algorithm wrappers in SuperLearner:
## [1] "SL.bartMachine" "SL.bayesglm" "SL.biglasso"
## [4] "SL.caret" "SL.caret.rpart" "SL.cforest"
## [7] "SL.earth" "SL.extraTrees" "SL.gam"
## [10] "SL.gbm" "SL.glm" "SL.glm.interaction"
## [13] "SL.glmnet" "SL.ipredbagg" "SL.kernelKnn"
## [16] "SL.knn" "SL.ksvm" "SL.lda"
## [19] "SL.leekasso" "SL.lm" "SL.loess"
## [22] "SL.logreg" "SL.mean" "SL.nnet"
## [25] "SL.nnls" "SL.polymars" "SL.qda"
## [28] "SL.randomForest" "SL.ranger" "SL.ridge"
## [31] "SL.rpart" "SL.rpartPrune" "SL.speedglm"
## [34] "SL.speedlm" "SL.step" "SL.step.forward"
## [37] "SL.step.interaction" "SL.stepAIC" "SL.svm"
## [40] "SL.template" "SL.xgboost"
##
## All screening algorithm wrappers in SuperLearner:
## [1] "All"
## [1] "screen.corP" "screen.corRank" "screen.glmnet"
## [4] "screen.randomForest" "screen.SIS" "screen.template"
## [7] "screen.ttest" "write.screen.template"
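# Review the code for screen.corP, which screens on univariate correlation.
screen.corP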
## function (Y, X, family, obsWeights, id, method = "pearson", minPvalue = 0.1,
## minscreen = 2, ...)
## {
## listp <- apply(X, 2, function(x, Y, method) {
## ifelse(var(x) <= 0, 1, cor.test(x, y = Y, method = method)$p.value)
## }, Y = Y, method = method)
## whichVariable <- (listp <= minPvalue)
## if (sum(whichVariable) < minscreen) {
## warning("number of variables with p value less than minPvalue is less than minscreen")
## whichVariable[rank(listp) <= minscreen] <- TRUE
## }
## return(whichVariable)
## }
## <bytecode: 0x7fade0bc3388>
## <environment: namespace:SuperLearner>
set.seed(1)
# Fit the SuperLearner.
# We need to use list() instead of c().
cv_sl = CV.SuperLearner(Y = Y, X = X, family = binomial(), V = 10, parallel = "multicore",
SL.library = list("SL.mean", "SL.glmnet", c("SL.glmnet", "screen.corP")))
summary(cv_sl)
##
## Call:
## CV.SuperLearner(Y = Y, X = X, V = 10, family = binomial(), SL.library = list("SL.mean",
## "SL.glmnet", c("SL.glmnet", "screen.corP")), parallel = "multicore")
##
## Risk is based on: Mean Squared Error
##
## All risk estimates are based on V = 10
##
## Algorithm Ave se Min Max
## Super Learner 0.053945 0.0176982 0.0076111 0.097415
## Discrete SL 0.054328 0.0181955 0.0067969 0.099644
## SL.mean_All 0.252877 0.0033603 0.2456790 0.275309
## SL.glmnet_All 0.050763 0.0174886 0.0067969 0.099644
## SL.glmnet_screen.corP 0.057610 0.0184657 0.0071807 0.107504
Here, screening by univariate correlation with our outcome (keeping only variables with a p-value below 0.10) did not improve performance: the screened lasso has a slightly higher estimated risk than the unscreened version. Screening can still help on other datasets, so try some of the other screening algorithms as they may do better for a particular problem.
1.15 Optimize for AUC
For binary prediction we are typically trying to maximize AUC, which can be the best performance metric when our outcome variable has some imbalance. In other words, we don’t have exactly 50% 1s and 50% 0s in our outcome. Our SuperLearner is not targeting AUC by default, but it can if we tell it to by specifying our method.
set.seed(1)
cv_sl = CV.SuperLearner(Y = Y, X = X, family = binomial(), V = 5, method = "method.AUC",
SL.library = list("SL.mean", "SL.glmnet", c("SL.glmnet", "screen.corP")))
## Loading required package: cvAUC
## Loading required package: ROCR
## Loading required package: data.table
##
## cvAUC version: 1.1.0
## Notice to cvAUC users: Major speed improvements in version 1.1.0
##
##
## Call:
## CV.SuperLearner(Y = Y, X = X, V = 5, family = binomial(), SL.library = list("SL.mean",
## "SL.glmnet", c("SL.glmnet", "screen.corP")), method = "method.AUC")
##
## Risk is based on: Area under ROC curve (AUC)
##
## All risk estimates are based on V = 5
##
## Algorithm Ave se Min Max
## Super Learner 0.98358 NA 0.95833 1.0
## Discrete SL 0.98150 NA 0.95833 1.0
## SL.mean_All 0.50000 NA 0.50000 0.5
## SL.glmnet_All 0.98358 NA 0.95833 1.0
## SL.glmnet_screen.corP 0.98567 NA 0.97000 1.0
This conveniently shows us the AUC for each algorithm without us having to calculate it manually. Sadly we aren’t getting standard errors, though.
Another important optimizer to consider is non-negative log-likelihood, which is intended for binary outcomes and will often work better than NNLS (the default). This is specified by method = “method.NNloglik”.
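A minimal sketch of specifying it:

# Sketch: optimize ensemble weights via non-negative log-likelihood.
cv_sl = CV.SuperLearner(Y = Y, X = X, family = binomial(), V = 5,
                        method = "method.NNloglik",
                        SL.library = c("SL.mean", "SL.glmnet"))
summary(cv_sl)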
1.16 XGBoost hyperparameter exploration
XGBoost is a version of GBM that is even faster and has some extra settings. GBM’s performance depends heavily on its hyperparameter configuration, so we want to thoroughly test a wide range of configurations for any given problem. Let’s do 60 now. This will take a good amount of time (~7 minutes on my computer) so we need to at least use multiple cores, if not multiple computers.
# 5 * 4 * 3 = 60 different configurations.
tune = list(ntrees = c(200, 500, 1000, 2000, 5000),
max_depth = 1:4,
shrinkage = c(0.001, 0.01, 0.1))
# Set detailed_names = TRUE so we can see the configuration in each function name.
# Also shorten the name prefix.
learners = create.Learner("SL.xgboost", tune = tune, detailed_names = T, name_prefix = "xgb")
# 60 configurations - not too shabby.
length(learners$names)
## [1] 60
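# Review the configuration names.
learners$names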
## [1] "xgb_200_1_0.001" "xgb_500_1_0.001" "xgb_1000_1_0.001" "xgb_2000_1_0.001"
## [5] "xgb_5000_1_0.001" "xgb_200_2_0.001" "xgb_500_2_0.001" "xgb_1000_2_0.001"
## [9] "xgb_2000_2_0.001" "xgb_5000_2_0.001" "xgb_200_3_0.001" "xgb_500_3_0.001"
## [13] "xgb_1000_3_0.001" "xgb_2000_3_0.001" "xgb_5000_3_0.001" "xgb_200_4_0.001"
## [17] "xgb_500_4_0.001" "xgb_1000_4_0.001" "xgb_2000_4_0.001" "xgb_5000_4_0.001"
## [21] "xgb_200_1_0.01" "xgb_500_1_0.01" "xgb_1000_1_0.01" "xgb_2000_1_0.01"
## [25] "xgb_5000_1_0.01" "xgb_200_2_0.01" "xgb_500_2_0.01" "xgb_1000_2_0.01"
## [29] "xgb_2000_2_0.01" "xgb_5000_2_0.01" "xgb_200_3_0.01" "xgb_500_3_0.01"
## [33] "xgb_1000_3_0.01" "xgb_2000_3_0.01" "xgb_5000_3_0.01" "xgb_200_4_0.01"
## [37] "xgb_500_4_0.01" "xgb_1000_4_0.01" "xgb_2000_4_0.01" "xgb_5000_4_0.01"
## [41] "xgb_200_1_0.1" "xgb_500_1_0.1" "xgb_1000_1_0.1" "xgb_2000_1_0.1"
## [45] "xgb_5000_1_0.1" "xgb_200_2_0.1" "xgb_500_2_0.1" "xgb_1000_2_0.1"
## [49] "xgb_2000_2_0.1" "xgb_5000_2_0.1" "xgb_200_3_0.1" "xgb_500_3_0.1"
## [53] "xgb_1000_3_0.1" "xgb_2000_3_0.1" "xgb_5000_3_0.1" "xgb_200_4_0.1"
## [57] "xgb_500_4_0.1" "xgb_1000_4_0.1" "xgb_2000_4_0.1" "xgb_5000_4_0.1"
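# Confirm we are still set up to use multiple cores.
getOption("mc.cores")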
## [1] 8
# Remember to set multicore-compatible seed.
set.seed(1, "L'Ecuyer-CMRG")
# Fit the CV.SuperLearner. This will take 5-15 minutes.
system.time({
cv_sl = CV.SuperLearner(Y = Y, X = X, family = binomial(), V = 5, parallel = "multicore",
SL.library = c("SL.mean", "SL.glmnet", learners$names, "SL.randomForest"))
})
## user system elapsed
## 1040.534 165.460 396.831
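# Review the cross-validated results.
summary(cv_sl)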
##
## Call:
## CV.SuperLearner(Y = Y, X = X, V = 5, family = binomial(), SL.library = c("SL.mean",
## "SL.glmnet", learners$names, "SL.randomForest"), parallel = "multicore")
##
## Risk is based on: Mean Squared Error
##
## All risk estimates are based on V = 5
##
## Algorithm Ave se Min Max
## Super Learner 0.060078 0.01620045 0.0476525 0.077261
## Discrete SL 0.060078 0.01620045 0.0476525 0.077261
## SL.mean_All 0.252250 0.00357752 0.2476562 0.266406
## SL.glmnet_All 0.044856 0.01481325 0.0093352 0.060806
## xgb_200_1_0.001_All 0.249888 0.00061482 0.2492171 0.251987
## xgb_500_1_0.001_All 0.250235 0.00134680 0.2487389 0.254997
## xgb_1000_1_0.001_All 0.250763 0.00218798 0.2481870 0.258850
## xgb_2000_1_0.001_All 0.251434 0.00303791 0.2476490 0.263167
## xgb_5000_1_0.001_All 0.251897 0.00354261 0.2473389 0.265929
## xgb_200_2_0.001_All 0.249888 0.00061482 0.2492171 0.251987
## xgb_500_2_0.001_All 0.250235 0.00134680 0.2487389 0.254997
## xgb_1000_2_0.001_All 0.250763 0.00218798 0.2481870 0.258850
## xgb_2000_2_0.001_All 0.251434 0.00303791 0.2476490 0.263167
## xgb_5000_2_0.001_All 0.251897 0.00354261 0.2473389 0.265929
## xgb_200_3_0.001_All 0.249888 0.00061482 0.2492171 0.251987
## xgb_500_3_0.001_All 0.250235 0.00134680 0.2487389 0.254997
## xgb_1000_3_0.001_All 0.250763 0.00218798 0.2481870 0.258850
## xgb_2000_3_0.001_All 0.251434 0.00303791 0.2476490 0.263167
## xgb_5000_3_0.001_All 0.251897 0.00354261 0.2473389 0.265929
## xgb_200_4_0.001_All 0.249888 0.00061482 0.2492171 0.251987
## xgb_500_4_0.001_All 0.250235 0.00134680 0.2487389 0.254997
## xgb_1000_4_0.001_All 0.250763 0.00218798 0.2481870 0.258850
## xgb_2000_4_0.001_All 0.251434 0.00303791 0.2476490 0.263167
## xgb_5000_4_0.001_All 0.251897 0.00354261 0.2473389 0.265929
## xgb_200_1_0.01_All 0.248542 0.00301311 0.2446381 0.260466
## xgb_500_1_0.01_All 0.249004 0.00351074 0.2443307 0.263206
## xgb_1000_1_0.01_All 0.249033 0.00354046 0.2443126 0.263374
## xgb_2000_1_0.01_All 0.249033 0.00354070 0.2443125 0.263376
## xgb_5000_1_0.01_All 0.249033 0.00354070 0.2443125 0.263376
## xgb_200_2_0.01_All 0.248542 0.00301311 0.2446381 0.260466
## xgb_500_2_0.01_All 0.249004 0.00351074 0.2443307 0.263206
## xgb_1000_2_0.01_All 0.249033 0.00354046 0.2443126 0.263374
## xgb_2000_2_0.01_All 0.249033 0.00354070 0.2443125 0.263376
## xgb_5000_2_0.01_All 0.249033 0.00354070 0.2443125 0.263376
## xgb_200_3_0.01_All 0.248542 0.00301311 0.2446381 0.260466
## xgb_500_3_0.01_All 0.249004 0.00351074 0.2443307 0.263206
## xgb_1000_3_0.01_All 0.249033 0.00354046 0.2443126 0.263374
## xgb_2000_3_0.01_All 0.249033 0.00354070 0.2443125 0.263376
## xgb_5000_3_0.01_All 0.249033 0.00354070 0.2443125 0.263376
## xgb_200_4_0.01_All 0.248542 0.00301311 0.2446381 0.260466
## xgb_500_4_0.01_All 0.249004 0.00351074 0.2443307 0.263206
## xgb_1000_4_0.01_All 0.249033 0.00354046 0.2443126 0.263374
## xgb_2000_4_0.01_All 0.249033 0.00354070 0.2443125 0.263376
## xgb_5000_4_0.01_All 0.249033 0.00354070 0.2443125 0.263376
## xgb_200_1_0.1_All 0.221398 0.00366105 0.2131390 0.237089
## xgb_500_1_0.1_All 0.221398 0.00366105 0.2131390 0.237089
## xgb_1000_1_0.1_All 0.221398 0.00366105 0.2131390 0.237089
## xgb_2000_1_0.1_All 0.221398 0.00366105 0.2131390 0.237089
## xgb_5000_1_0.1_All 0.221398 0.00366105 0.2131390 0.237089
## xgb_200_2_0.1_All 0.221398 0.00366105 0.2131390 0.237089
## xgb_500_2_0.1_All 0.221398 0.00366105 0.2131390 0.237089
## xgb_1000_2_0.1_All 0.221398 0.00366105 0.2131390 0.237089
## xgb_2000_2_0.1_All 0.221398 0.00366105 0.2131390 0.237089
## xgb_5000_2_0.1_All 0.221398 0.00366105 0.2131390 0.237089
## xgb_200_3_0.1_All 0.221398 0.00366105 0.2131390 0.237089
## xgb_500_3_0.1_All 0.221398 0.00366105 0.2131390 0.237089
## xgb_1000_3_0.1_All 0.221398 0.00366105 0.2131390 0.237089
## xgb_2000_3_0.1_All 0.221398 0.00366105 0.2131390 0.237089
## xgb_5000_3_0.1_All 0.221398 0.00366105 0.2131390 0.237089
## xgb_200_4_0.1_All 0.221398 0.00366105 0.2131390 0.237089
## xgb_500_4_0.1_All 0.221398 0.00366105 0.2131390 0.237089
## xgb_1000_4_0.1_All 0.221398 0.00366105 0.2131390 0.237089
## xgb_2000_4_0.1_All 0.221398 0.00366105 0.2131390 0.237089
## xgb_5000_4_0.1_All 0.221398 0.00366105 0.2131390 0.237089
## SL.randomForest_All 0.059964 0.01524596 0.0470806 0.077261
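# Review the weight distribution across the external CV folds.
print(review_weights(cv_sl), digits = 3)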
## mean(weight) sd min max
## SL.randomForest_All 0.8 0.4472136 0 1
## SL.glmnet_All 0.2 0.4472136 0 1
## SL.mean_All 0.0 0.0000000 0 0
## xgb_200_1_0.001_All 0.0 0.0000000 0 0
## xgb_500_1_0.001_All 0.0 0.0000000 0 0
## xgb_1000_1_0.001_All 0.0 0.0000000 0 0
## xgb_2000_1_0.001_All 0.0 0.0000000 0 0
## xgb_5000_1_0.001_All 0.0 0.0000000 0 0
## xgb_200_2_0.001_All 0.0 0.0000000 0 0
## xgb_500_2_0.001_All 0.0 0.0000000 0 0
## xgb_1000_2_0.001_All 0.0 0.0000000 0 0
## xgb_2000_2_0.001_All 0.0 0.0000000 0 0
## xgb_5000_2_0.001_All 0.0 0.0000000 0 0
## xgb_200_3_0.001_All 0.0 0.0000000 0 0
## xgb_500_3_0.001_All 0.0 0.0000000 0 0
## xgb_1000_3_0.001_All 0.0 0.0000000 0 0
## xgb_2000_3_0.001_All 0.0 0.0000000 0 0
## xgb_5000_3_0.001_All 0.0 0.0000000 0 0
## xgb_200_4_0.001_All 0.0 0.0000000 0 0
## xgb_500_4_0.001_All 0.0 0.0000000 0 0
## xgb_1000_4_0.001_All 0.0 0.0000000 0 0
## xgb_2000_4_0.001_All 0.0 0.0000000 0 0
## xgb_5000_4_0.001_All 0.0 0.0000000 0 0
## xgb_200_1_0.01_All 0.0 0.0000000 0 0
## xgb_500_1_0.01_All 0.0 0.0000000 0 0
## xgb_1000_1_0.01_All 0.0 0.0000000 0 0
## xgb_2000_1_0.01_All 0.0 0.0000000 0 0
## xgb_5000_1_0.01_All 0.0 0.0000000 0 0
## xgb_200_2_0.01_All 0.0 0.0000000 0 0
## xgb_500_2_0.01_All 0.0 0.0000000 0 0
## xgb_1000_2_0.01_All 0.0 0.0000000 0 0
## xgb_2000_2_0.01_All 0.0 0.0000000 0 0
## xgb_5000_2_0.01_All 0.0 0.0000000 0 0
## xgb_200_3_0.01_All 0.0 0.0000000 0 0
## xgb_500_3_0.01_All 0.0 0.0000000 0 0
## xgb_1000_3_0.01_All 0.0 0.0000000 0 0
## xgb_2000_3_0.01_All 0.0 0.0000000 0 0
## xgb_5000_3_0.01_All 0.0 0.0000000 0 0
## xgb_200_4_0.01_All 0.0 0.0000000 0 0
## xgb_500_4_0.01_All 0.0 0.0000000 0 0
## xgb_1000_4_0.01_All 0.0 0.0000000 0 0
## xgb_2000_4_0.01_All 0.0 0.0000000 0 0
## xgb_5000_4_0.01_All 0.0 0.0000000 0 0
## xgb_200_1_0.1_All 0.0 0.0000000 0 0
## xgb_500_1_0.1_All 0.0 0.0000000 0 0
## xgb_1000_1_0.1_All 0.0 0.0000000 0 0
## xgb_2000_1_0.1_All 0.0 0.0000000 0 0
## xgb_5000_1_0.1_All 0.0 0.0000000 0 0
## xgb_200_2_0.1_All 0.0 0.0000000 0 0
## xgb_500_2_0.1_All 0.0 0.0000000 0 0
## xgb_1000_2_0.1_All 0.0 0.0000000 0 0
## xgb_2000_2_0.1_All 0.0 0.0000000 0 0
## xgb_5000_2_0.1_All 0.0 0.0000000 0 0
## xgb_200_3_0.1_All 0.0 0.0000000 0 0
## xgb_500_3_0.1_All 0.0 0.0000000 0 0
## xgb_1000_3_0.1_All 0.0 0.0000000 0 0
## xgb_2000_3_0.1_All 0.0 0.0000000 0 0
## xgb_5000_3_0.1_All 0.0 0.0000000 0 0
## xgb_200_4_0.1_All 0.0 0.0000000 0 0
## xgb_500_4_0.1_All 0.0 0.0000000 0 0
## xgb_1000_4_0.1_All 0.0 0.0000000 0 0
## xgb_2000_4_0.1_All 0.0 0.0000000 0 0
## xgb_5000_4_0.1_All 0.0 0.0000000 0 0
We can see how stochastic the weights are for an individual execution of SuperLearner.
Troubleshooting
- If you get an error about predict for xgb.Booster, you probably need to install the latest version of XGBoost from github.
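One way to do that (a sketch; the R package lives in a subdirectory of the repository):

# Install the development version of the xgboost R package from github.
devtools::install_github("dmlc/xgboost", subdir = "R-package")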