The main goal of mlrMBO is to optimize expensive black-box functions by model-based optimization (also known as Bayesian optimization) and to provide a unified interface for different optimization tasks and algorithmic MBO variants. Supported are, among other things, single- and multi-objective optimization, multi-point proposals, and mixed as well as hierarchical parameter spaces.
This vignette gives a brief overview of the features of mlrMBO. More detailed documentation can be found at http://mlr-org.github.io/mlrMBO/.
Installing mlrMBO will also install and load the dependencies mlr, ParamHelpers, and smoof. For this tutorial, you also need the additional packages DiceKriging and randomForest.
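If these packages are not yet available on your system, they can be installed from CRAN in the usual way, for example:

```r
# Install mlrMBO; the dependencies mlr, ParamHelpers, and smoof
# are pulled in automatically.
install.packages("mlrMBO")

# Additional packages used in this tutorial.
install.packages(c("DiceKriging", "randomForest"))
```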
library(mlrMBO)

The optimization itself is carried out by the function mbo(). As a simple example we minimize a cosine-like function with an initial design of 5 points and 10 sequential MBO iterations. Thus, the optimizer is allowed 15 evaluations of the objective function in total to approximate the optimum.
Instead of defining the objective manually, we use the smoof package, which offers many toy and benchmark functions for optimization.
obj.fun = makeCosineMixtureFunction(1)
obj.fun = convertToMinimization(obj.fun)
print(obj.fun)
## Single-objective function
## Name: Cosine Mixture Function
## Description: no description
## Tags: single-objective, discontinuous, non-differentiable, separable, scalable, multimodal
## Noisy: FALSE
## Minimize: TRUE
## Constraints: TRUE
## Number of parameters: 1
##            Type len Def  Constr Req Tunable Trafo
## x numericvector   1   - -1 to 1   -    TRUE     -
## Global optimum objective value of -0.1000 at
## x
## 1 0
ggplot2::autoplot(obj.fun)
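With the objective function in place, the optimization run described above can be sketched as follows. This is a minimal setup: the latin hypercube initial design and the Kriging surrogate (via DiceKriging) with expected improvement mirror mlrMBO's usual defaults for purely numeric parameter spaces, but are spelled out here for clarity.

```r
library(mlrMBO)

# Initial design: 5 points sampled by latin hypercube sampling.
des = generateDesign(n = 5L, par.set = getParamSet(obj.fun), fun = lhs::maximinLHS)

# Surrogate model: Kriging with standard-error prediction (needs DiceKriging).
surr.km = makeLearner("regr.km", predict.type = "se", covtype = "matern3_2")

# Control object: expected improvement as infill criterion,
# terminating after 10 sequential MBO iterations.
ctrl = makeMBOControl()
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritEI())
ctrl = setMBOControlTermination(ctrl, iters = 10L)

# Run the optimization and inspect the best point found.
run = mbo(obj.fun, design = des, learner = surr.km, control = ctrl, show.info = FALSE)
print(run$x)  # best parameter value found
print(run$y)  # corresponding objective value
```

The result object also stores the full optimization path (run$opt.path), which is useful for diagnosing the progress of the surrogate-guided search.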