Tuning spaces from the Bischl et al. (2021) article.

Source

Bischl B, Binder M, Lang M, Pielok T, Richter J, Coors S, Thomas J, Ullmann T, Becker M, Boulesteix A, Deng D, Lindauer M (2021). “Hyperparameter Optimization: Foundations, Algorithms, Best Practices and Open Challenges.” arXiv:2107.05847, https://arxiv.org/abs/2107.05847.
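
These spaces ship as dictionary entries of the mlr3tuningspaces package. The sketch below lists the registered spaces and retrieves one of them; the <task type>.<learner>.default key naming (e.g. classif.glmnet.default) is assumed here.

    library(mlr3tuningspaces)

    # overview of all tuning spaces registered in the dictionary
    as.data.table(mlr_tuning_spaces)

    # retrieve a space from this article by its dictionary key
    tuning_space = lts("classif.glmnet.default")
    tuning_space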

glmnet tuning space

  • s [1e-04, 10000]

  • alpha [0, 1]
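
For reference, the glmnet space can be reconstructed by hand with paradox. Searching s on a log scale (logscale = TRUE) is an assumption in this sketch: regularization strengths spanning several orders of magnitude are typically tuned that way.

    library(paradox)

    # hand-rolled equivalent of the glmnet space above
    search_space = ps(
      s     = p_dbl(lower = 1e-04, upper = 10000, logscale = TRUE),
      alpha = p_dbl(lower = 0, upper = 1)
    )
    search_space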

kknn tuning space

  • k [1, 50]

  • distance [1, 5]

  • kernel [“rectangular”, “optimal”, “epanechnikov”, “biweight”, “triweight”, “cos”, “inv”, “gaussian”, “rank”]
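
A retrieved space can also hand back a learner with the search space already attached. A minimal sketch, assuming the TuningSpace class exposes a get_learner() method and the key classif.kknn.default:

    library(mlr3tuningspaces)

    tuning_space = lts("classif.kknn.default")

    # learner with the search space set in its parameter values
    learner = tuning_space$get_learner()
    learner$param_set$values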

ranger tuning space

  • mtry.ratio [0, 1]

  • replace [TRUE, FALSE]

  • sample.fraction [0.1, 1]

  • num.trees [1, 2000]
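
Note that mtry.ratio is not a native ranger argument; it is a ratio parameter that the mlr3learners ranger learner translates into mtry relative to the number of features at train time. Assuming lts() also accepts a Learner, the space can be attached directly:

    library(mlr3)
    library(mlr3learners)
    library(mlr3tuningspaces)

    # lts() on a learner sets this tuning space on its parameter values
    learner = lts(lrn("classif.ranger"))
    learner$param_set$values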

rpart tuning space

  • minsplit [2, 128]

  • minbucket [1, 64]

  • cp [1e-04, 0.1]
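
Any of these spaces plugs straight into a tuner. A minimal random-search run over the rpart space, sketched against the current mlr3tuning::tune() interface:

    library(mlr3)
    library(mlr3tuning)
    library(mlr3tuningspaces)

    instance = tune(
      tuner = tnr("random_search"),
      task = tsk("penguins"),
      learner = lts(lrn("classif.rpart")),
      resampling = rsmp("cv", folds = 3),
      measures = msr("classif.ce"),
      term_evals = 20
    )
    instance$result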

svm tuning space

  • cost [1e-04, 10000]

  • kernel [“polynomial”, “radial”, “sigmoid”, “linear”]

  • degree [2, 5]

  • gamma [1e-04, 10000]
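
degree is only meaningful for the polynomial kernel, and gamma is unused by the linear kernel, so a faithful encoding of this space needs parameter dependencies. A paradox sketch in which the log scales and the exact dependency conditions are assumptions:

    library(paradox)

    search_space = ps(
      cost   = p_dbl(1e-04, 10000, logscale = TRUE),
      kernel = p_fct(c("polynomial", "radial", "sigmoid", "linear")),
      degree = p_int(2, 5, depends = kernel == "polynomial"),
      gamma  = p_dbl(1e-04, 10000, logscale = TRUE,
                     depends = kernel %in% c("polynomial", "radial", "sigmoid"))
    )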

xgboost tuning space

  • eta [1e-04, 1]

  • nrounds [1, 5000]

  • max_depth [1, 20]

  • colsample_bytree [0.1, 1]

  • colsample_bylevel [0.1, 1]

  • lambda [0.001, 1000]

  • alpha [0.001, 1000]

  • subsample [0.1, 1]
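
With eight parameters, the xgboost space is the largest of the set and is typically wrapped in an AutoTuner so that tuning happens inside resampling. A sketch assuming mlr3tuning's auto_tuner() helper:

    library(mlr3)
    library(mlr3learners)
    library(mlr3tuning)
    library(mlr3tuningspaces)

    at = auto_tuner(
      tuner = tnr("random_search"),
      learner = lts(lrn("classif.xgboost")),
      resampling = rsmp("cv", folds = 3),
      measure = msr("classif.ce"),
      term_evals = 50
    )
    # at$train(tsk("sonar")) runs the tuning and fits the final model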