
Tuning spaces from the Kuehn (2018) article.

Source

Kuehn D, Probst P, Thomas J, Bischl B (2018). “Automatic Exploration of Machine Learning Experiments on OpenML.” arXiv:1806.10961, https://arxiv.org/abs/1806.10961.
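
These search spaces ship with the mlr3tuningspaces package and can be looked up in its tuning-space dictionary. The sketch below only uses the generic dictionary accessors; the exact keys under which the Kuehn (2018) spaces are registered (for example an “rbv1” suffix) are an assumption and should be read off the printed table.

    # list all registered tuning spaces with their learners and labels,
    # then look for the Kuehn (2018) entries in the printed table
    library(mlr3tuningspaces)
    library(data.table)

    as.data.table(mlr_tuning_spaces)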

glmnet tuning space

  • alpha \([0, 1]\)

  • s \([1e-04, 1000]\)
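
For illustration, the glmnet box constraints above can be written down directly as a paradox search space. This is a hand-written sketch, not the object shipped in mlr3tuningspaces; in particular, searching s on a log scale is an assumption.

    library(paradox)

    # hand-written sketch of the glmnet space; the log scale for s is assumed
    search_space_glmnet = ps(
      alpha = p_dbl(lower = 0, upper = 1),
      s     = p_dbl(lower = 1e-04, upper = 1000, logscale = TRUE)
    )
    search_space_glmnet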

kknn tuning space

  • k \([1, 30]\)
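
A retrieved tuning space is applied to a learner by assigning its tune tokens to the learner’s parameter set. In the sketch below, the dictionary key (with its assumed “rbv1” suffix) is a placeholder; check the dictionary listing above for the actual key.

    library(mlr3tuningspaces)
    library(mlr3learners)

    # look up the kknn space; the "rbv1" key suffix is an assumption
    tuning_space = lts("classif.kknn.rbv1")

    # attach the tune tokens (here: k in [1, 30]) to a kknn learner
    learner = lrn("classif.kknn")
    learner$param_set$values = tuning_space$values
    learner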

ranger tuning space

  • num.trees \([1, 2000]\)

  • replace [TRUE, FALSE]

  • sample.fraction \([0.1, 1]\)

  • mtry.ratio \([0, 1]\)

  • respect.unordered.factors [“ignore”, “order”, “partition”]

  • min.node.size \([1, 100]\)

  • splitrule [“gini”, “extratrees”]

  • num.random.splits \([1, 100]\)

mtry.power from the article is replaced by mtry.ratio here; a hand-written sketch of the space follows below.
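
The sketch is written with paradox shorthand constructors. The dependency that activates num.random.splits only for the “extratrees” split rule is an assumption about how the space is encoded, not a quote of the shipped object.

    library(paradox)

    # hand-written sketch of the ranger space
    search_space_ranger = ps(
      num.trees       = p_int(1, 2000),
      replace         = p_lgl(),
      sample.fraction = p_dbl(0.1, 1),
      mtry.ratio      = p_dbl(0, 1),
      respect.unordered.factors = p_fct(c("ignore", "order", "partition")),
      min.node.size   = p_int(1, 100),
      splitrule       = p_fct(c("gini", "extratrees")),
      # assumed dependency: random splits only apply to the extratrees rule
      num.random.splits = p_int(1, 100, depends = splitrule == "extratrees")
    )
    search_space_ranger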

rpart tuning space

  • cp \([1e-04, 1]\)

  • maxdepth \([1, 30]\)

  • minbucket \([1, 100]\)

  • minsplit \([1, 100]\)
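
The rpart space is cheap enough to try end to end. The sketch below attaches a retrieved space to the learner and runs a short random search with mlr3tuning; the dictionary key suffix (“rbv1”), the task, and the evaluation budget are assumptions chosen for illustration.

    library(mlr3)
    library(mlr3tuning)
    library(mlr3tuningspaces)

    # rpart learner with the Kuehn (2018) space attached (key suffix assumed)
    learner = lrn("classif.rpart")
    learner$param_set$values = lts("classif.rpart.rbv1")$values

    instance = tune(
      tuner      = tnr("random_search"),
      task       = tsk("sonar"),
      learner    = learner,
      resampling = rsmp("cv", folds = 3),
      measures   = msr("classif.ce"),
      term_evals = 20
    )
    instance$result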

svm tuning space

  • kernel [“linear”, “polynomial”, “radial”]

  • cost \([1e-04, 1000]\)

  • gamma \([1e-04, 1000]\)

  • tolerance \([1e-04, 2]\)

  • degree \([2, 5]\)
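
The wide cost and gamma ranges are typically searched on a log scale. The sketch below assumes such an encoding, keeps only kernel, cost, and gamma for brevity, and draws a few random configurations to show the transformed values; it is not the shipped space object.

    library(paradox)

    # assumed log-scale sketch of the svm cost/gamma ranges
    search_space_svm = ps(
      kernel = p_fct(c("linear", "polynomial", "radial")),
      cost   = p_dbl(1e-04, 1000, logscale = TRUE),
      gamma  = p_dbl(1e-04, 1000, logscale = TRUE)
    )

    # draw three random configurations and apply the transformation
    design = generate_design_random(search_space_svm, 3)
    design$transpose()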

xgboost tuning space

  • booster [“gblinear”, “gbtree”, “dart”]

  • nrounds \([2, 8]\)

  • eta \([1e-04, 1]\)

  • gamma \([1e-05, 7]\)

  • lambda \([1e-04, 1000]\)

  • alpha \([1e-04, 1000]\)

  • subsample \([0.1, 1]\)

  • max_depth \([1, 15]\)

  • min_child_weight \([1, 100]\)

  • colsample_bytree \([0.01, 1]\)

  • colsample_bylevel \([0.01, 1]\)

  • rate_drop \([0, 1]\)

  • skip_drop \([0, 1]\)
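
In xgboost itself, rate_drop and skip_drop only take effect with the dart booster. The partial sketch below shows how such conditional parameters can be expressed with paradox dependencies; whether the shipped space encodes these dependencies (and any log scaling of eta) is an assumption, and only a subset of the parameters is reproduced.

    library(paradox)

    # partial sketch of the xgboost space with assumed booster dependencies
    search_space_xgboost = ps(
      booster   = p_fct(c("gblinear", "gbtree", "dart")),
      nrounds   = p_int(2, 8),
      eta       = p_dbl(1e-04, 1, logscale = TRUE),
      max_depth = p_int(1, 15),
      # dart-only parameters; the dependency encoding is assumed
      rate_drop = p_dbl(0, 1, depends = booster == "dart"),
      skip_drop = p_dbl(0, 1, depends = booster == "dart")
    )
    search_space_xgboost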