Comparing mlr3pipelines to other frameworks

Florian Pfisterer


Below, we collect some examples comparing mlr3pipelines to other software packages, such as mlr, recipes and sklearn.

Before diving deeper, we give a short introduction to PipeOps.

An introduction to “PipeOp”s

In this example, we create a linear Pipeline. After scaling all input features, we rotate our data using principal component analysis. After this transformation, we use a simple decision tree learner for classification.

As exemplary data, we will use the “iris” classification task. This object contains the famous iris dataset and some meta-information, such as the target variable.

We quickly split our data into a train and a test set:
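
A minimal sketch of this split, using mlr3's partition() helper (the 2/3 ratio is an illustrative choice):

    library(mlr3)
    library(mlr3pipelines)

    # The "iris" classification task ships with mlr3:
    task = tsk("iris")

    # partition() returns row ids for a train/test split in $train and $test:
    split = partition(task, ratio = 2 / 3)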

A Pipeline (or Graph) contains multiple pipeline operators (“PipeOp”s), where each PipeOp transforms the data as it flows through it. For this use case, we require 3 transformations:

  • PipeOpScale to scale all input features
  • PipeOpPCA to rotate the data using principal component analysis
  • PipeOpLearner to fit the decision tree learner

A list of available PipeOps can be obtained from the mlr_pipeops dictionary:
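
For example (converting the dictionary to an overview table requires data.table):

    library(mlr3pipelines)

    # ids of all registered PipeOps:
    mlr_pipeops$keys()

    # or, with data.table attached, an overview table:
    library(data.table)
    as.data.table(mlr_pipeops)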

First we define the required PipeOps:
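
One way to construct the three PipeOps is via the po() shorthand (the variable names are our own):

    library(mlr3)
    library(mlr3pipelines)

    pipeop_scale   = po("scale")                          # centers and scales features
    pipeop_pca     = po("pca")                            # principal component rotation
    pipeop_learner = po("learner", lrn("classif.rpart"))  # decision tree learner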

A quick glance into a PipeOp

In order to get a better understanding of what the respective PipeOps do, we quickly look at one of them in detail:
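
For instance, printing the PCA PipeOp constructed above shows its id, parameter set and input/output channels:

    pipeop_pca            # prints id, parameters and channels
    pipeop_pca$param_set  # the hyperparameters this PipeOp exposes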

The most important slots in a PipeOp are:

  • $train(): A function used to train the PipeOp.
  • $predict(): A function used to predict with the PipeOp.

The $train() and $predict() functions define the core functionality of our PipeOp. In many cases, it is imperative to treat train and test data separately in order not to leak information from the training set into the test set. For this we require a $train() function that learns the appropriate transformations from the training set and a $predict() function that applies the transformation to future data.

In the case of PipeOpPCA this means the following:

  • $train() learns a rotation matrix from its input and saves this matrix to an additional slot, $state. It returns the rotated input data stored in a new Task.
  • $predict() uses the rotation matrix stored in $state in order to rotate future, unseen data. It returns the rotated data in a new Task (see the small demonstration below).
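
A small demonstration of this behavior, continuing the sketch from above (note that $train() and $predict() take and return lists, since PipeOps can have multiple input and output channels):

    op = po("pca")
    op$train(list(task))    # returns a list holding the rotated Task
    op$state$rotation       # the rotation matrix learned during training
    op$predict(list(task))  # rotates (future) data using the stored matrix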

Constructing the Pipeline

We can now connect the PipeOps constructed earlier into a Pipeline. We can do this using the %>>% operator:
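
Continuing the sketch from above:

    graph = pipeop_scale %>>% pipeop_pca %>>% pipeop_learner

    graph$plot()  # optional: visualize the Graph (requires igraph)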

The result of this operation is a “Graph”. A Graph connects the output of each PipeOp to the input of the next, which allows us to specify linear processing pipelines. In this case, we connect the output of the scaling PipeOp to the input of the PCA PipeOp, and the output of the PCA PipeOp to the input of the PipeOpLearner.

We can now train the Graph using the iris Task.
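
Under the split sketched earlier, this could look as follows (Graph$train() takes a Task as input):

    # Restrict the task to the training rows and train the whole Graph:
    graph$train(task$clone()$filter(split$train))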

When we now train the graph, the data flows through the graph as follows:

  • The Task flows into the PipeOpScale. The PipeOp scales each column of the data contained in the Task and passes a new Task containing the scaled data to its output.
  • The scaled Task flows into the PipeOpPCA. PCA transforms the data and returns a (possibly smaller) Task that contains the transformed data.
  • The transformed data then flows into the learner, in our case classif.rpart. The learner is trained on it and stores a model that can later be used to predict on new data.

In order to predict on new data, we need to save the relevant transformations our data went through during training. As a result, each PipeOp saves a state, where the information required to appropriately transform future data is stored. In our case, these are the mean and standard deviation of each column for PipeOpScale, the PCA rotation matrix for PipeOpPCA, and the learned model for PipeOpLearner.
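
Continuing the sketch, the stored states can be inspected and are applied automatically during prediction (the PipeOp ids used below, such as "scale" and "classif.rpart", are the defaults):

    graph$pipeops$scale$state          # per-column mean and standard deviation
    graph$pipeops$pca$state$rotation   # the PCA rotation matrix
    graph$pipeops$classif.rpart$state  # the fitted rpart model

    # Predict on the held-out rows; each stored state is applied in order:
    graph$predict(task$clone()$filter(split$test))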

mlr3pipelines vs. mlr

In order to showcase the benefits of mlr3pipelines over mlr’s Wrapper mechanism, we compare the case of imputing missing values before filtering the top 2 features and then applying a learner.

While mlr's wrappers are generally less verbose and require slightly less code, this comes at a heavy cost in flexibility: wrappers generally cannot process data in parallel.

mlr3pipelines

The fact that mlr's wrappers have to be applied inside-out, i.e. in reverse order, is often confusing. This is much more straightforward in mlr3pipelines, where we simply chain the different methods using %>>%. Additionally, mlr3pipelines offers far greater flexibility with respect to the kinds of Pipelines that can be constructed: it allows for the construction of parallel and conditional pipelines, which was previously not possible.
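
A sketch of the impute-filter-learn chain from this comparison (mean imputation and the variance filter are illustrative choices; mlr3filters is assumed to be installed):

    library(mlr3)
    library(mlr3pipelines)
    library(mlr3filters)

    graph = po("imputemean") %>>%                                        # impute missing values
      po("filter", filter = flt("variance"), filter.nfeatures = 2) %>>%  # keep the top 2 features
      po("learner", lrn("classif.rpart"))                                # fit the learner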

mlr3pipelines vs. sklearn.pipeline.Pipeline

In order to broaden the horizon, we compare mlr3pipelines to Python sklearn's Pipeline. sklearn.pipeline.Pipeline sequentially applies a list of transforms before fitting a final estimator. Intermediate steps of the pipeline are transforms, i.e. steps that can learn from the data but also transform it as it flows through them. The purpose of the pipeline is to assemble several steps that can be cross-validated together while setting different parameters. For this, it enables setting the parameters of the various steps, using their names and the parameter name separated by a '__'.

It is thus conceptually very similar to mlr3pipelines. As in mlr3pipelines, we can tune over a full Pipeline using various tuning methods. Pipeline, however, mainly supports linear pipelines: while it can execute parallel steps, such as for example bagging, it does not support conditional execution, i.e. an equivalent of PipeOpBranch.

At the same time, the different transforms in the pipeline can be cached, which makes tuning over the configuration space of a Pipeline more efficient, as executing some steps multiple times can be avoided.

Below, we compare functionality available in both mlr3pipelines and sklearn.pipeline.Pipeline.

The following example, obtained from the sklearn documentation, showcases a Pipeline that selects a feature and performs PCA on the original data in parallel, concatenates the resulting datasets, and applies a support vector machine.
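
For comparison, a hedged sketch of a similar graph in mlr3pipelines: the input Task is copied into two paths (single-feature selection and PCA), the paths are concatenated via a feature union, and an SVM is fitted. The selected feature and the SVM learner (which assumes mlr3learners and e1071 are installed) are illustrative choices, not the sklearn example's exact configuration:

    library(mlr3)
    library(mlr3pipelines)
    library(mlr3learners)

    graph = po("copy", outnum = 2) %>>%                           # duplicate the input Task
      gunion(list(
        po("select", selector = selector_name("Petal.Length")),   # path 1: keep one feature
        po("pca")                                                 # path 2: PCA on all features
      )) %>>%
      po("featureunion") %>>%                                     # concatenate both paths
      po("learner", lrn("classif.svm"))                           # fit the SVM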

mlr3pipelines vs. recipes

recipes is a new package that covers some of the same application steps as mlr3pipelines. Both packages feature the possibility to connect different pre- and post-processing methods using a pipe operator. As the recipes package tightly integrates with the tidymodels ecosystem, much of the functionality integrated there can be used in recipes. We compare recipes to mlr3pipelines using an example from the recipes vignette.

The aim of the analysis is to predict whether customers pay back their loans, given some information on the customers. In order to do this, we build a model that does the following:

  1. It first imputes missing values using k-nearest neighbours.
  2. All factor variables are converted to numeric variables using dummy encoding.
  3. The data is first centered, then scaled.
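
A rough mlr3pipelines sketch of these three steps (not the vignette's exact code): the k-nearest-neighbour imputation is approximated with PipeOpImputeLearner wrapping a kknn regression learner, which assumes the kknn package and mlr3learners are available and covers numeric features only:

    library(mlr3)
    library(mlr3pipelines)
    library(mlr3learners)

    graph = po("imputelearner", lrn("regr.kknn")) %>>%  # 1. impute via k-nearest neighbours
      po("encode", method = "one-hot") %>>%             # 2. dummy-encode factor variables
      po("scale")                                       # 3. center, then scale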

In order to validate the algorithm, the data is first split into a train and a test set using initial_split(), training() and testing(). The recipe trained on the training data (see the steps above) is then applied to the test data.