MLJ for Data Scientists in Two Hours

An end-to-end example using the Telco Churn dataset

To ensure code in this tutorial runs as shown, download the tutorial project folder and follow these instructions.

If you have questions or suggestions about this tutorial, please open an issue here.

An application of the MLJ toolbox to the Telco Customer Churn dataset, aimed at practicing data scientists new to MLJ (Machine Learning in Julia). This tutorial does not cover exploratory data analysis.

MLJ is a multi-paradigm machine learning toolbox (i.e., not just deep-learning).

For other MLJ learning resources see the Learning MLJ section of the manual.

Topics covered: Grabbing and preparing a dataset, basic fit/predict workflow, constructing a pipeline to include data pre-processing, estimating performance metrics, ROC curves, confusion matrices, feature importance, basic feature selection, controlling iterative models, hyper-parameter optimization (tuning).

Prerequisites for this tutorial. Previous experience building, evaluating, and optimizing machine learning models using scikit-learn, caret, mlr, Weka, or a similar tool. No previous experience with MLJ is assumed. Only fairly basic familiarity with Julia is required. DataFrames.jl is used, but only in a minimal way (this cheatsheet may help).

Time. Between two and three hours, first time through.

Summary of methods and types introduced

code | purpose
---- | -------
OpenML.load(id) | grab a dataset from OpenML.org
scitype(X) | inspect the scientific type (scitype) of object X
schema(X) | inspect the column scitypes (scientific types) of a table X
coerce(X, ...) | fix column encodings to get appropriate scitypes
partition(data, frac1, frac2, ...; rng=...) | vertically split data, which can be a table, vector or matrix
unpack(table, f1, f2, ...) | horizontally split table based on conditions f1, f2, ..., applied to column names
@load ModelType pkg=... | load code defining a model type
input_scitype(model) | inspect the scitype that a model requires for features (inputs)
target_scitype(model) | inspect the scitype that a model requires for the target (labels)
ContinuousEncoder | built-in model type for re-encoding all features as Continuous
model1 |> model2 |> ... | combine multiple models into a pipeline
measures("under curve") | list all measures (metrics) with the string "under curve" in their documentation
accuracy(yhat, y) | compute accuracy of predictions yhat against ground truth observations y
auc(yhat, y), brier_loss(yhat, y) | evaluate two probabilistic measures (yhat a vector of probability distributions)
machine(model, X, y) | bind model to training data X (features) and y (target)
fit!(mach, rows=...) | train machine using specified rows (observation indices)
predict(mach, rows=...) | make in-sample model predictions given specified rows
predict(mach, Xnew) | make predictions given new features Xnew
fitted_params(mach) | inspect learned parameters
report(mach) | inspect other outcomes of training
confmat(yhat, y) | confusion matrix for predictions yhat and ground truth y
roc(yhat, y) | compute points on the receiver operating characteristic (ROC) curve
StratifiedCV(nfolds=6) | 6-fold stratified cross-validation resampling strategy
Holdout(fraction_train=0.7) | holdout resampling strategy
evaluate(model, X, y; resampling=..., options...) | estimate performance metrics for model using the data X, y
FeatureSelector() | transformer for selecting features
Step(3) | iteration control for stepping 3 iterations
NumberSinceBest(6), TimeLimit(60/5), InvalidValue() | stopping criterion iteration controls
IteratedModel(model=..., controls=..., options...) | wrap an iterative model in controls
range(model, :some_hyperparam, lower=..., upper=...) | define a numeric range
RandomSearch() | random search tuning strategy
TunedModel(model=..., tuning=..., options...) | wrap the supervised model in the specified tuning strategy

Warm up: Building a model for the iris dataset

Before turning to the Telco Customer Churn dataset, we very quickly build a predictive model for Fisher's well-known iris data set, as a way of introducing the main actors in any MLJ workflow. Details that you don't fully grasp should become clearer in the Telco study.

This section is a condensed adaptation of the Getting Started example in the MLJ documentation.

First, using the built-in iris dataset, we load and inspect the features X_iris (a table) and target variable y_iris (a vector):

using MLJ
X_iris, y_iris = @load_iris;
schema(X_iris)
┌──────────────┬────────────┬─────────┐
│ names        │ scitypes   │ types   │
├──────────────┼────────────┼─────────┤
│ sepal_length │ Continuous │ Float64 │
│ sepal_width  │ Continuous │ Float64 │
│ petal_length │ Continuous │ Float64 │
│ petal_width  │ Continuous │ Float64 │
└──────────────┴────────────┴─────────┘
y_iris[1:4]
4-element CategoricalArrays.CategoricalArray{String,1,UInt32}:
 "setosa"
 "setosa"
 "setosa"
 "setosa"
levels(y_iris)
3-element Vector{String}:
 "setosa"
 "versicolor"
 "virginica"

We load a decision tree model, from the package DecisionTree.jl:

DecisionTree = @load DecisionTreeClassifier pkg=DecisionTree # model type
model = DecisionTree(min_samples_split=5)                    # model instance
import MLJDecisionTreeInterface ✔
DecisionTreeClassifier(
  max_depth = -1, 
  min_samples_leaf = 1, 
  min_samples_split = 5, 
  min_purity_increase = 0.0, 
  n_subfeatures = 0, 
  post_prune = false, 
  merge_purity_threshold = 1.0, 
  display_depth = 5, 
  rng = Random._GLOBAL_RNG())

In MLJ, a model is just a container for hyper-parameters of some learning algorithm. It does not store learned parameters.

Next, we bind the model together with the available data in what's called a machine:

mach = machine(model, X_iris, y_iris)
Machine trained 0 times; caches data
  model: DecisionTreeClassifier(max_depth = -1, …)
  args: 
    1:	Source @858 ⏎ `ScientificTypesBase.Table{AbstractVector{ScientificTypesBase.Continuous}}`
    2:	Source @919 ⏎ `AbstractVector{ScientificTypesBase.Multiclass{3}}`

A machine is essentially just a model (i.e., hyper-parameters) plus data, but it additionally stores learned parameters (the tree, in this case) once it is trained on some view of the data:

train_rows = vcat(1:60, 91:150); # some row indices (observations are rows not columns)
fit!(mach, rows=train_rows)
fitted_params(mach)
(tree = Decision Tree
Leaves: 5
Depth:  3,
 encoding = Dict{CategoricalArrays.CategoricalValue{String, UInt32}, UInt32}("virginica" => 0x00000003, "setosa" => 0x00000001, "versicolor" => 0x00000002),
 features = [:sepal_length, :sepal_width, :petal_length, :petal_width],)

A machine stores some other information enabling warm restarts for some models, but we won't go into that here. You are allowed to access and mutate the model parameter:

mach.model.min_samples_split  = 10
fit!(mach, rows=train_rows) # re-train with new hyper-parameter
Machine trained 2 times; caches data
  model: DecisionTreeClassifier(max_depth = -1, …)
  args: 
    1:	Source @858 ⏎ `ScientificTypesBase.Table{AbstractVector{ScientificTypesBase.Continuous}}`
    2:	Source @919 ⏎ `AbstractVector{ScientificTypesBase.Multiclass{3}}`

Now we can make predictions on some other view of the data, as in

predict(mach, rows=71:73)
3-element CategoricalDistributions.UnivariateFiniteVector{ScientificTypesBase.Multiclass{3}, String, UInt32, Float64}:
 UnivariateFinite{ScientificTypesBase.Multiclass{3}}(setosa=>0.0, versicolor=>0.0, virginica=>1.0)
 UnivariateFinite{ScientificTypesBase.Multiclass{3}}(setosa=>0.0, versicolor=>1.0, virginica=>0.0)
 UnivariateFinite{ScientificTypesBase.Multiclass{3}}(setosa=>0.0, versicolor=>0.25, virginica=>0.75)

or on completely new data, as in

Xnew = (sepal_length = [5.1, 6.3],
        sepal_width = [3.0, 2.5],
        petal_length = [1.4, 4.9],
        petal_width = [0.3, 1.5])
yhat = predict(mach, Xnew)
2-element CategoricalDistributions.UnivariateFiniteVector{ScientificTypesBase.Multiclass{3}, String, UInt32, Float64}:
 UnivariateFinite{ScientificTypesBase.Multiclass{3}}(setosa=>1.0, versicolor=>0.0, virginica=>0.0)
 UnivariateFinite{ScientificTypesBase.Multiclass{3}}(setosa=>0.0, versicolor=>0.25, virginica=>0.75)

These are probabilistic predictions which can be manipulated using a widely adopted interface defined in the Distributions.jl package. For example, we can get raw probabilities like this:

pdf.(yhat, "virginica")
2-element Vector{Float64}:
 0.0
 0.75
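
For point predictions (the most probable class for each observation) we can broadcast mode over the predictions, and pdf accepts any class label. A quick sketch:

mode.(yhat)               # point predictions; here "setosa" and "virginica"
pdf.(yhat, "versicolor")  # probabilities assigned to the "versicolor" class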

A single prediction is displayed like this:

yhat[2]
              UnivariateFinite{ScientificTypesBase.Multiclass{3}} 
              ┌                                        ┐ 
       setosa ┤ 0.0                                      
   versicolor ┤■■■■■■■■■■■ 0.25                          
    virginica ┤■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■ 0.75   
              └                                        ┘ 

We now turn to the Telco dataset.

Getting the Telco data

import DataFrames
data = OpenML.load(42178) # data set from OpenML.org
df0 = DataFrames.DataFrame(data)
first(df0, 4)
4×21 DataFrame
 Row │ customerID  gender  SeniorCitizen  Partner  Dependents  tenure   PhoneService  MultipleLines     InternetService  OnlineSecurity  OnlineBackup  DeviceProtection  TechSupport  StreamingTV  StreamingMovies  Contract        PaperlessBilling  PaymentMethod              MonthlyCharges  TotalCharges  Churn
     │ String      String  Float64        String   String      Float64  String        String            String           String          String        String            String       String       String           String          String            String                     Float64         String        String
─────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
   1 │ 7590-VHVEG  Female            0.0  Yes      No              1.0  No            No phone service  DSL              No              Yes           No                No           No           No               Month-to-month  Yes               Electronic check                    29.85  29.85         No
   2 │ 5575-GNVDE  Male              0.0  No       No             34.0  Yes           No                DSL              Yes             No            Yes               No           No           No               One year        No                Mailed check                        56.95  1889.5        No
   3 │ 3668-QPYBK  Male              0.0  No       No              2.0  Yes           No                DSL              Yes             Yes           No                No           No           No               Month-to-month  Yes               Mailed check                        53.85  108.15        Yes
   4 │ 7795-CFOCW  Male              0.0  No       No             45.0  No            No phone service  DSL              Yes             No            Yes               Yes          No           No               One year        No                Bank transfer (automatic)           42.3   1840.75       No

The object of this tutorial is to build and evaluate supervised learning models to predict the Churn variable, a binary variable measuring customer retention, based on other variables that are relevant.

In the table, observations correspond to rows, and features to columns, which is the convention for representing all two-dimensional data in MLJ.

Type coercion

Introduces: scitype, schema, coerce

A "scientific type" or scitype indicates how MLJ will interpret data. For example, typeof(3.14) == Float64, while scitype(3.14) == Continuous and also scitype(3.14f0) == Continuous. In MLJ, model data requirements are articulated using scitypes.

Here are common "scalar" scitypes:

[Image: table of common scalar scitypes]
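
To make this concrete, here are a few scalar scitypes you can check at the REPL (the values are just examples):

scitype(3.14)     # Continuous
scitype(42)       # Count
scitype("high")   # Textual
scitype(missing)  # Missing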

There are also container scitypes. For example, the scitype of any N-dimensional array is AbstractArray{S, N}, where S is the scitype of the elements:

scitype(["cat", "mouse", "dog"])
AbstractVector{Textual} (alias for AbstractArray{ScientificTypesBase.Textual, 1})

The schema operator summarizes the column scitypes of a table:

schema(df0) |> DataFrames.DataFrame  # converted to DataFrame for better display
21×3 DataFrame
 Row │ names             scitypes    types
     │ Symbol            DataType    DataType
─────┼────────────────────────────────────────
   1 │ customerID        Textual     String
   2 │ gender            Textual     String
   3 │ SeniorCitizen     Continuous  Float64
   4 │ Partner           Textual     String
   5 │ Dependents        Textual     String
   6 │ tenure            Continuous  Float64
   7 │ PhoneService      Textual     String
   8 │ MultipleLines     Textual     String
   9 │ InternetService   Textual     String
  10 │ OnlineSecurity    Textual     String
  11 │ OnlineBackup      Textual     String
  12 │ DeviceProtection  Textual     String
  13 │ TechSupport       Textual     String
  14 │ StreamingTV       Textual     String
  15 │ StreamingMovies   Textual     String
  16 │ Contract          Textual     String
  17 │ PaperlessBilling  Textual     String
  18 │ PaymentMethod     Textual     String
  19 │ MonthlyCharges    Continuous  Float64
  20 │ TotalCharges      Textual     String
  21 │ Churn             Textual     String

All of the fields being interpreted as Textual are really something else, either Multiclass or, in the case of TotalCharges, Continuous. In fact, TotalCharges is mostly floats wrapped as strings. However, it needs special treatment because some elements consist of a single space, " ", which we'll treat as "0.0".

fix_blanks(v) = map(v) do x
    if x == " "
        return "0.0"
    else
        return x
    end
end

df0.TotalCharges = fix_blanks(df0.TotalCharges);
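
As an aside, the same cleaning can be written as a one-liner using Base's replace on the vector; this is just an equivalent alternative to fix_blanks:

df0.TotalCharges = replace(df0.TotalCharges, " " => "0.0");  # replaces elements equal to " "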

Coercing the TotalCharges type to ensure a Continuous scitype:

coerce!(df0, :TotalCharges => Continuous);

Coercing all remaining Textual data to Multiclass:

coerce!(df0, Textual => Multiclass);

Finally, we'll coerce our target variable Churn to be OrderedFactor, rather than Multiclass, to enable a reliable interpretation of metrics like "true positive rate". By convention, the first class is the negative one:

coerce!(df0, :Churn => OrderedFactor)
levels(df0.Churn) # to check order
2-element Vector{String}:
 "No"
 "Yes"

Re-inspecting the scitypes:

schema(df0) |> DataFrames.DataFrame
21×3 DataFrame
 Row │ names             scitypes          types
     │ Symbol            DataType          DataType
─────┼──────────────────────────────────────────────────────────────────────
   1 │ customerID        Multiclass{7043}  CategoricalValue{String, UInt32}
   2 │ gender            Multiclass{2}     CategoricalValue{String, UInt32}
   3 │ SeniorCitizen     Continuous        Float64
   4 │ Partner           Multiclass{2}     CategoricalValue{String, UInt32}
   5 │ Dependents        Multiclass{2}     CategoricalValue{String, UInt32}
   6 │ tenure            Continuous        Float64
   7 │ PhoneService      Multiclass{2}     CategoricalValue{String, UInt32}
   8 │ MultipleLines     Multiclass{3}     CategoricalValue{String, UInt32}
   9 │ InternetService   Multiclass{3}     CategoricalValue{String, UInt32}
  10 │ OnlineSecurity    Multiclass{3}     CategoricalValue{String, UInt32}
  11 │ OnlineBackup      Multiclass{3}     CategoricalValue{String, UInt32}
  12 │ DeviceProtection  Multiclass{3}     CategoricalValue{String, UInt32}
  13 │ TechSupport       Multiclass{3}     CategoricalValue{String, UInt32}
  14 │ StreamingTV       Multiclass{3}     CategoricalValue{String, UInt32}
  15 │ StreamingMovies   Multiclass{3}     CategoricalValue{String, UInt32}
  16 │ Contract          Multiclass{3}     CategoricalValue{String, UInt32}
  17 │ PaperlessBilling  Multiclass{2}     CategoricalValue{String, UInt32}
  18 │ PaymentMethod     Multiclass{4}     CategoricalValue{String, UInt32}
  19 │ MonthlyCharges    Continuous        Float64
  20 │ TotalCharges      Continuous        Float64
  21 │ Churn             OrderedFactor{2}  CategoricalValue{String, UInt32}

Preparing a holdout set for final testing

Introduces: partition

To reduce training times for the purposes of this tutorial, we're going to dump 90% of observations (after shuffling) and split off 30% of the remainder for use as a lock-and-throw-away-the-key holdout set:

df, df_test, df_dumped = partition(df0, 0.07, 0.03, # in ratios 7:3:90
                                   stratify=df0.Churn,
                                   rng=123);

The reader interested in including all data can instead do df, df_test = partition(df0, 0.7, rng=123).

Splitting data into target and features

Introduces: unpack

In the following call, the column with name Churn is copied over to a vector y, and every remaining column, except customerID (which contains no useful information), goes into a table X. Here Churn is the target variable for which we seek predictions, given new versions of the features X.

y, X = unpack(df, ==(:Churn), !=(:customerID));
schema(X).names
(:gender, :SeniorCitizen, :Partner, :Dependents, :tenure, :PhoneService, :MultipleLines, :InternetService, :OnlineSecurity, :OnlineBackup, :DeviceProtection, :TechSupport, :StreamingTV, :StreamingMovies, :Contract, :PaperlessBilling, :PaymentMethod, :MonthlyCharges, :TotalCharges)
intersect([:Churn, :customerID], schema(X).names)
Symbol[]

We'll do the same for the holdout data:

ytest, Xtest = unpack(df_test, ==(:Churn), !=(:customerID));

Loading a model and checking type requirements

Introduces: @load, input_scitype, target_scitype

For tools helping us to identify suitable models, see the Model Search section of the manual. We will build a gradient tree-boosting model, a popular first choice for structured data like ours. The model code lives in a third-party package called EvoTrees.jl, which is loaded as follows:

Booster = @load EvoTreeClassifier pkg=EvoTrees
import EvoTrees ✔
EvoTrees.EvoTreeClassifier

Recall that a model is just a container for some algorithm's hyper-parameters. Let's create a Booster with default values for the hyper-parameters:

booster = Booster()
EvoTreeClassifier(
  loss = EvoTrees.Softmax(), 
  nrounds = 10, 
  λ = 0.0, 
  γ = 0.0, 
  η = 0.1, 
  max_depth = 5, 
  min_weight = 1.0, 
  rowsample = 1.0, 
  colsample = 1.0, 
  nbins = 64, 
  α = 0.5, 
  metric = :mlogloss, 
  rng = Random.MersenneTwister(123), 
  device = "cpu")

This model is appropriate for the kind of target variable we have, as the following passing test confirms:

scitype(y) <: target_scitype(booster)
true

However, our features X cannot be directly used with booster:

scitype(X) <: input_scitype(booster)
false

As it turns out, this is because booster, like the majority of MLJ supervised models, expects the features to be Continuous. (With some experience, this can be gleaned from input_scitype(booster).) So we need categorical feature encoding, discussed next.

Building a model pipeline to incorporate feature encoding

Introduces: ContinuousEncoder, pipeline operator |>

The built-in ContinuousEncoder model transforms an arbitrary table to a table whose features are all Continuous (dropping any fields it does not know how to encode). In particular, all Multiclass features are one-hot encoded.
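
To see what the encoder does on its own, we can wrap it in its own machine and transform X directly; a sketch (the pipeline constructed below performs this step automatically):

mach_enc = machine(ContinuousEncoder(), X) |> fit!
W = transform(mach_enc, X)   # the re-encoded feature table
unique(schema(W).scitypes)   # expect only Continuous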

A pipeline is a stand-alone model that internally combines one or more models in a linear (non-branching) pipeline. Here's a pipeline that adds the ContinuousEncoder as a pre-processor to the gradient tree-boosting model above:

pipe = ContinuousEncoder() |> booster
ProbabilisticPipeline(
  continuous_encoder = ContinuousEncoder(
        drop_last = false, 
        one_hot_ordered_factors = false), 
  evo_tree_classifier = EvoTreeClassifier(
        loss = EvoTrees.Softmax(), 
        nrounds = 10, 
        λ = 0.0, 
        γ = 0.0, 
        η = 0.1, 
        max_depth = 5, 
        min_weight = 1.0, 
        rowsample = 1.0, 
        colsample = 1.0, 
        nbins = 64, 
        α = 0.5, 
        metric = :mlogloss, 
        rng = Random.MersenneTwister(123), 
        device = "cpu"), 
  cache = true)

Note that the component models appear as hyper-parameters of pipe. Pipelines are an implementation of a more general model composition interface provided by MLJ that advanced users may want to learn about.

From the above display, we see that component model hyper-parameters are now nested, but they are still accessible (important in hyper-parameter optimization):

pipe.evo_tree_classifier.max_depth
5

Evaluating the pipeline model's performance

Introduces: measures (function), measures: brier_loss, auc, accuracy; machine, fit!, predict, fitted_params, report, roc, resampling strategy StratifiedCV, evaluate, FeatureSelector

Without touching our test set Xtest, ytest, we will estimate the performance of our pipeline model, with default hyper-parameters, in two different ways:

Evaluating by hand. First, we'll do this "by hand" using the fit! and predict workflow illustrated for the iris data set above, using a holdout resampling strategy. At the same time we'll see how to generate a confusion matrix, ROC curve, and inspect feature importances.

Automated performance evaluation. Next we'll apply the more typical and convenient evaluate workflow, but using StratifiedCV (stratified cross-validation) which is more informative.

In any case, we need to choose some measures (metrics) to quantify the performance of our model. For a complete list of measures, one can run measures(). Alternatively, we can search by a keyword appearing in the documentation:

measures("Brier")
2-element Vector{NamedTuple{(:name, :instances, :human_name, :target_scitype, :supports_weights, :supports_class_weights, :prediction_type, :orientation, :reports_each_observation, :aggregation, :is_feature_dependent, :docstring, :distribution_type)}}:
 (name = BrierLoss, instances = [brier_loss], ...)
 (name = BrierScore, instances = [brier_score], ...)

We will be primarily using brier_loss, but also auc (area under the ROC curve) and accuracy.

Evaluating by hand (with a holdout set)

Our pipeline model can be trained just like the decision tree model we built for the iris data set. Binding all non-test data to the pipeline model:

mach_pipe = machine(pipe, X, y)
Machine trained 0 times; caches data
  model: ProbabilisticPipeline(continuous_encoder = ContinuousEncoder(drop_last = false, …), …)
  args: 
    1:	Source @885 ⏎ `ScientificTypesBase.Table{Union{AbstractVector{ScientificTypesBase.Continuous}, AbstractVector{ScientificTypesBase.Multiclass{2}}, AbstractVector{ScientificTypesBase.Multiclass{4}}, AbstractVector{ScientificTypesBase.Multiclass{3}}}}`
    2:	Source @785 ⏎ `AbstractVector{ScientificTypesBase.OrderedFactor{2}}`

We already encountered the partition method above. Here we apply it to row indices, instead of data containers, as fit! and predict only need a view of the data to work.

train, validation = partition(1:length(y), 0.7)
fit!(mach_pipe, rows=train)
Machine trained 1 time; caches data
  model: ProbabilisticPipeline(continuous_encoder = ContinuousEncoder(drop_last = false, …), …)
  args: 
    1:	Source @885 ⏎ `ScientificTypesBase.Table{Union{AbstractVector{ScientificTypesBase.Continuous}, AbstractVector{ScientificTypesBase.Multiclass{2}}, AbstractVector{ScientificTypesBase.Multiclass{4}}, AbstractVector{ScientificTypesBase.Multiclass{3}}}}`
    2:	Source @785 ⏎ `AbstractVector{ScientificTypesBase.OrderedFactor{2}}`

We note in passing that we can access two kinds of information from a trained machine:

  • The learned parameters (e.g., coefficients of a linear model): we use fitted_params(mach_pipe)

  • Other by-products of training (e.g., feature importances): we use report(mach_pipe)

fp = fitted_params(mach_pipe);
keys(fp)
(:evo_tree_classifier, :continuous_encoder, :machines, :fitted_params_given_machine)

For example, we can check that the encoder did not actually drop any features:

Set(fp.continuous_encoder.features_to_keep) == Set(schema(X).names)
true

And, from the report, extract feature importances:

rpt = report(mach_pipe)
keys(rpt.evo_tree_classifier)
(:feature_importances,)
fi = rpt.evo_tree_classifier.feature_importances
feature_importance_table =
    (feature=Symbol.(first.(fi)), importance=last.(fi)) |> DataFrames.DataFrame
45×2 DataFrame
 Row │ feature                            importance
     │ Symbol                             Float64
─────┼────────────────────────────────────────────────
   1 │ tenure                             0.367021
   2 │ MonthlyCharges                     0.167477
   3 │ Contract__Month-to-month           0.16637
   4 │ TotalCharges                       0.0608734
   5 │ PaymentMethod__Bank transfer (au…  0.0381171
   6 │ Dependents__No                     0.0273874
   7 │ SeniorCitizen                      0.0262011
   8 │ PaperlessBilling__No               0.024691
   9 │ DeviceProtection__No               0.0234396
  10 │ StreamingMovies__Yes               0.0182277
  11 │ gender__Female                     0.0133198
  12 │ OnlineBackup__Yes                  0.010152
  13 │ MultipleLines__No                  0.0093157
  14 │ StreamingTV__No                    0.00695159
  15 │ DeviceProtection__Yes              0.00653241
  16 │ PaperlessBilling__Yes              0.00581502
  17 │ TechSupport__No                    0.00460489
  18 │ PaymentMethod__Credit card (auto…  0.00369404
  19 │ StreamingMovies__No                0.00352602
  20 │ Contract__One year                 0.00339499
  21 │ Contract__Two year                 0.00316092
  22 │ Partner__Yes                       0.00292232
  23 │ OnlineBackup__No                   0.0029127
  24 │ Dependents__Yes                    0.00225914
  25 │ Partner__No                        0.00145106
  26 │ gender__Male                       0.000181241
  27 │ PaymentMethod__Mailed check        1.17948e-6
  28 │ OnlineSecurity__No internet serv…  0.0
  29 │ OnlineSecurity__No                 0.0
  30 │ PaymentMethod__Electronic check    0.0
  31 │ OnlineSecurity__Yes                0.0
  32 │ InternetService__Fiber optic       0.0
  33 │ TechSupport__Yes                   0.0
  34 │ OnlineBackup__No internet service  0.0
  35 │ InternetService__No                0.0
  36 │ StreamingTV__Yes                   0.0
  37 │ PhoneService__No                   0.0
  38 │ PhoneService__Yes                  0.0
  39 │ DeviceProtection__No internet se…  0.0
  40 │ StreamingTV__No internet service   0.0
  41 │ StreamingMovies__No internet ser…  0.0
  42 │ TechSupport__No internet service   0.0
  43 │ MultipleLines__Yes                 0.0
  44 │ MultipleLines__No phone service    0.0
  45 │ InternetService__DSL               0.0

For models not reporting feature importances, we recommend the Shapley.jl package.

Returning to predictions and evaluations of our measures:

ŷ = predict(mach_pipe, rows=validation);
print(
    "Measurements:\n",
    "  brier loss: ", brier_loss(ŷ, y[validation]) |> mean, "\n",
    "  auc:        ", auc(ŷ, y[validation]),                "\n",
    "  accuracy:   ", accuracy(mode.(ŷ), y[validation])
)
Measurements:
  brier loss: 0.27683139589958233
  auc:        0.8171277997364954
  accuracy:   0.7972972972972973

Note that we need mode in the last case because accuracy expects point predictions, not probabilistic ones. (One can alternatively use predict_mode to generate the predictions.)
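
Here is the predict_mode alternative, a sketch that should reproduce the accuracy reported above:

ŷ_point = predict_mode(mach_pipe, rows=validation)
accuracy(ŷ_point, y[validation])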

While we're here, let's also generate a confusion matrix and a receiver operating characteristic (ROC) curve:

confmat(mode.(ŷ), y[validation])
              ┌───────────────────────────┐
              │       Ground Truth        │
┌─────────────┼─────────────┬─────────────┤
│  Predicted  │     No      │     Yes     │
├─────────────┼─────────────┼─────────────┤
│     No      │     101     │     16      │
├─────────────┼─────────────┼─────────────┤
│     Yes     │     14      │     17      │
└─────────────┴─────────────┴─────────────┘

Note: Importing the plotting package and calling the plotting functions for the first time can take a minute or so.

using Plots
roc_curve = roc(ŷ, y[validation])
plt = scatter(roc_curve, legend=false)
plot!(plt, xlab="false positive rate", ylab="true positive rate")
plot!([0, 1], [0, 1], linewidth=2, linestyle=:dash, color=:black)

Automated performance evaluation (more typical workflow)

We can also get performance estimates with a single call to the evaluate function, which also allows for more complicated resampling - in this case stratified cross-validation. To make this more comprehensive, we set repeats=3 below to make our cross-validation "Monte Carlo" (3 random size-6 partitions of the observation space, for a total of 18 folds) and set acceleration=CPUThreads() to parallelize the computation.

We choose a StratifiedCV resampling strategy; the complete list of options is here.

e_pipe = evaluate(pipe, X, y,
                  resampling=StratifiedCV(nfolds=6, rng=123),
                  measures=[brier_loss, auc, accuracy],
                  repeats=3,
                  acceleration=CPUThreads())
PerformanceEvaluation object with these fields:
  measure, measurement, operation, per_fold,
  per_observation, fitted_params_per_fold,
  report_per_fold, train_test_rows
Extract:
┌──────────────────┬─────────────┬──────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ measure          │ measurement │ operation    │ per_fold                                                                                                                       │
├──────────────────┼─────────────┼──────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ BrierLoss()      │ 0.313       │ predict      │ [0.311, 0.344, 0.291, 0.348, 0.357, 0.295, 0.285, 0.313, 0.324, 0.296, 0.284, 0.343, 0.316, 0.268, 0.306, 0.369, 0.287, 0.298] │
│ AreaUnderCurve() │ 0.788       │ predict      │ [0.778, 0.743, 0.802, 0.774, 0.724, 0.808, 0.832, 0.788, 0.743, 0.82, 0.862, 0.692, 0.761, 0.845, 0.827, 0.734, 0.833, 0.824]  │
│ Accuracy()       │ 0.783       │ predict_mode │ [0.771, 0.78, 0.78, 0.78, 0.732, 0.817, 0.795, 0.78, 0.78, 0.793, 0.805, 0.768, 0.783, 0.841, 0.78, 0.744, 0.793, 0.768]       │
└──────────────────┴─────────────┴──────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘

(There is also a version of evaluate for machines. Query the evaluate and evaluate! doc-strings to learn more about these functions and what the PerformanceEvaluation object e_pipe records.)
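
For instance, the machine variant re-uses the machine we already have bound to X and y; a sketch (note that evaluate! re-trains the machine as it resamples):

evaluate!(mach_pipe,
          resampling=StratifiedCV(nfolds=6, rng=123),
          measures=[brier_loss, auc, accuracy])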

While less than ideal, let's adopt the common practice of using the standard error of a cross-validation score as an estimate of the uncertainty of a performance measure's expected value. Here's a utility function to calculate 95% confidence intervals for our performance estimates based on this practice, and its application to the current evaluation:

using Measurements
function confidence_intervals(e)
    factor = 2.0 # to get level of 95%
    measure = e.measure
    nfolds = length(e.per_fold[1])
    measurement = [e.measurement[j] ± factor*std(e.per_fold[j])/sqrt(nfolds - 1)
                   for j in eachindex(measure)]
    table = (measure=measure, measurement=measurement)
    return DataFrames.DataFrame(table)
end

confidence_intervals_basic_model = confidence_intervals(e_pipe)
3×2 DataFrame
 Row │ measure           measurement
     │ Measure           Measuremen…
─────┼───────────────────────────────
   1 │ BrierLoss()       0.313±0.014
   2 │ AreaUnderCurve()  0.788±0.023
   3 │ Accuracy()        0.783±0.012

Filtering out unimportant features

Introduces: FeatureSelector

Before continuing, we'll modify our pipeline to drop those features with low feature importance, to speed up later optimization:

unimportant_features = filter(:importance => <(0.005), feature_importance_table).feature

pipe2 = ContinuousEncoder() |>
    FeatureSelector(features=unimportant_features, ignore=true) |> booster
ProbabilisticPipeline(
  continuous_encoder = ContinuousEncoder(
        drop_last = false, 
        one_hot_ordered_factors = false), 
  feature_selector = FeatureSelector(
        features = [:TechSupport__No, Symbol("PaymentMethod__Credit card (automatic)"), :StreamingMovies__No, Symbol("Contract__One year"), Symbol("Contract__Two year"), :Partner__Yes, :OnlineBackup__No, :Dependents__Yes, :Partner__No, :gender__Male, Symbol("PaymentMethod__Mailed check"), Symbol("OnlineSecurity__No internet service"), :OnlineSecurity__No, Symbol("PaymentMethod__Electronic check"), :OnlineSecurity__Yes, Symbol("InternetService__Fiber optic"), :TechSupport__Yes, Symbol("OnlineBackup__No internet service"), :InternetService__No, :StreamingTV__Yes, :PhoneService__No, :PhoneService__Yes, Symbol("DeviceProtection__No internet service"), Symbol("StreamingTV__No internet service"), Symbol("StreamingMovies__No internet service"), Symbol("TechSupport__No internet service"), :MultipleLines__Yes, Symbol("MultipleLines__No phone service"), :InternetService__DSL], 
        ignore = true), 
  evo_tree_classifier = EvoTreeClassifier(
        loss = EvoTrees.Softmax(), 
        nrounds = 10, 
        λ = 0.0, 
        γ = 0.0, 
        η = 0.1, 
        max_depth = 5, 
        min_weight = 1.0, 
        rowsample = 1.0, 
        colsample = 1.0, 
        nbins = 64, 
        α = 0.5, 
        metric = :mlogloss, 
        rng = Random.MersenneTwister(123, (0, 86172, 85170, 780)), 
        device = "cpu"), 
  cache = true)

Wrapping our iterative model in control strategies

Introduces: control strategies: Step, NumberSinceBest, TimeLimit, InvalidValue, model wrapper IteratedModel, resampling strategy: Holdout

We want to optimize the hyper-parameters of our model. Since our model is iterative, these parameters include the (nested) iteration parameter pipe.evo_tree_classifier.nrounds. Sometimes this parameter is optimized first, fixed, and then maybe optimized again after the other parameters. Here we take a more principled approach, wrapping our model in a control strategy that makes it "self-iterating". The strategy applies a stopping criterion to out-of-sample estimates of model performance, computed on an internally constructed holdout set. In this way, we avoid some data hygiene issues, and, when we subsequently optimize other parameters, we will always be using an optimal number of iterations.

Note that this approach can be applied to any iterative MLJ model, eg, the neural network models provided by MLJFlux.jl.

First, we select appropriate controls from this list:

controls = [
    Step(1),              # to increment iteration parameter (`pipe.nrounds`)
    NumberSinceBest(4),   # main stopping criterion
    TimeLimit(2/3600),    # never train more than 2 sec
    InvalidValue()        # stop if NaN or ±Inf encountered
]
4-element Vector{Any}:
 IterationControl.Step(1)
 EarlyStopping.NumberSinceBest(4)
 EarlyStopping.TimeLimit(Dates.Millisecond(2000))
 EarlyStopping.InvalidValue()

Now we wrap our pipeline model using the IteratedModel wrapper, being sure to specify the measure on which internal estimates of the out-of-sample performance will be based:

iterated_pipe = IteratedModel(model=pipe2,
                              controls=controls,
                              measure=brier_loss,
                              resampling=Holdout(fraction_train=0.7))
ProbabilisticIteratedModel(
  model = ProbabilisticPipeline(
        continuous_encoder = ContinuousEncoder(drop_last = false, …), 
        feature_selector = FeatureSelector(features = [:TechSupport__No, Symbol("PaymentMethod__Credit card (automatic)"), :StreamingMovies__No, Symbol("Contract__One year"), Symbol("Contract__Two year"), :Partner__Yes, :OnlineBackup__No, :Dependents__Yes, :Partner__No, :gender__Male, Symbol("PaymentMethod__Mailed check"), Symbol("OnlineSecurity__No internet service"), :OnlineSecurity__No, Symbol("PaymentMethod__Electronic check"), :OnlineSecurity__Yes, Symbol("InternetService__Fiber optic"), :TechSupport__Yes, Symbol("OnlineBackup__No internet service"), :InternetService__No, :StreamingTV__Yes, :PhoneService__No, :PhoneService__Yes, Symbol("DeviceProtection__No internet service"), Symbol("StreamingTV__No internet service"), Symbol("StreamingMovies__No internet service"), Symbol("TechSupport__No internet service"), :MultipleLines__Yes, Symbol("MultipleLines__No phone service"), :InternetService__DSL], …), 
        evo_tree_classifier = EvoTreeClassifier(loss = EvoTrees.Softmax(), …), 
        cache = true), 
  controls = Any[IterationControl.Step(1), EarlyStopping.NumberSinceBest(4), EarlyStopping.TimeLimit(Dates.Millisecond(2000)), EarlyStopping.InvalidValue()], 
  resampling = Holdout(
        fraction_train = 0.7, 
        shuffle = false, 
        rng = Random._GLOBAL_RNG()), 
  measure = BrierLoss(), 
  weights = nothing, 
  class_weights = nothing, 
  operation = MLJModelInterface.predict, 
  retrain = false, 
  check_measure = true, 
  iteration_parameter = nothing, 
  cache = true)

We've set resampling=Holdout(fraction_train=0.7) to arrange that data attached to our model should be internally split into a train set (70%) and a holdout set (30%) for determining the out-of-sample estimate of the Brier loss.

For demonstration purposes, let's bind iterated_pipe to all data not in our don't-touch holdout set, and train on all of that data:

mach_iterated_pipe = machine(iterated_pipe, X, y)
fit!(mach_iterated_pipe);

To recap, internally this training is split into two separate steps:

  • A controlled iteration step, training on a 70% portion of the supplied data, with the total number of iterations determined by the specified stopping criteria (based on out-of-sample performance estimates computed on the remaining 30% holdout)

  • A final step that trains the atomic model on all available data using the number of iterations determined in the first step. Calling predict on mach_iterated_pipe means using the learned parameters of the second step.
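
To see which stopping criterion actually fired and how many iterations were performed, inspect the machine's report; a sketch, assuming the report layout shown later under "Hyper-parameter optimization":

report(mach_iterated_pipe).controls |> collect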

Hyper-parameter optimization (model tuning)

Introduces: range, model wrapper TunedModel, RandomSearch

We now turn to hyper-parameter optimization. A tool not discussed here is the learning_curve function, which can be useful when wanting to visualize the effect of changes to a single hyper-parameter (which could be an iteration parameter). See, for example, this section of the manual or this tutorial.

Fine tuning the hyper-parameters of a gradient booster can be somewhat involved. Here we settle for simultaneously optimizing two key parameters: max_depth and η (learning_rate).

Like iteration control, model optimization in MLJ is implemented as a model wrapper, called TunedModel. After wrapping a model in a tuning strategy and binding the wrapped model to data in a machine called mach, calling fit!(mach) instigates a search for optimal model hyperparameters, within a specified range, and then uses all supplied data to train the best model. To predict using that model, one then calls predict(mach, Xnew). In this way the wrapped model may be viewed as a "self-tuning" version of the unwrapped model. That is, wrapping the model simply transforms certain hyper-parameters into learned parameters (just as IteratedModel does for an iteration parameter).

To start with, we define ranges for the parameters of interest. Since these parameters are nested, let's force a display of our model to a larger depth:

show(iterated_pipe, 2)
ProbabilisticIteratedModel(
  model = ProbabilisticPipeline(
        continuous_encoder = ContinuousEncoder(
              drop_last = false, 
              one_hot_ordered_factors = false), 
        feature_selector = FeatureSelector(
              features = [:TechSupport__No, Symbol("PaymentMethod__Credit card (automatic)"), :StreamingMovies__No, Symbol("Contract__One year"), Symbol("Contract__Two year"), :Partner__Yes, :OnlineBackup__No, :Dependents__Yes, :Partner__No, :gender__Male, Symbol("PaymentMethod__Mailed check"), Symbol("OnlineSecurity__No internet service"), :OnlineSecurity__No, Symbol("PaymentMethod__Electronic check"), :OnlineSecurity__Yes, Symbol("InternetService__Fiber optic"), :TechSupport__Yes, Symbol("OnlineBackup__No internet service"), :InternetService__No, :StreamingTV__Yes, :PhoneService__No, :PhoneService__Yes, Symbol("DeviceProtection__No internet service"), Symbol("StreamingTV__No internet service"), Symbol("StreamingMovies__No internet service"), Symbol("TechSupport__No internet service"), :MultipleLines__Yes, Symbol("MultipleLines__No phone service"), :InternetService__DSL], 
              ignore = true), 
        evo_tree_classifier = EvoTreeClassifier(
              loss = EvoTrees.Softmax(), 
              nrounds = 10, 
              λ = 0.0, 
              γ = 0.0, 
              η = 0.1, 
              max_depth = 5, 
              min_weight = 1.0, 
              rowsample = 1.0, 
              colsample = 1.0, 
              nbins = 64, 
              α = 0.5, 
              metric = :mlogloss, 
              rng = Random.MersenneTwister(123, (0, 86172, 85170, 780)), 
              device = "cpu"), 
        cache = true), 
  controls = Any[IterationControl.Step(1), EarlyStopping.NumberSinceBest(4), EarlyStopping.TimeLimit(Dates.Millisecond(2000)), EarlyStopping.InvalidValue()], 
  resampling = Holdout(
        fraction_train = 0.7, 
        shuffle = false, 
        rng = Random._GLOBAL_RNG()), 
  measure = BrierLoss(), 
  weights = nothing, 
  class_weights = nothing, 
  operation = MLJModelInterface.predict, 
  retrain = false, 
  check_measure = true, 
  iteration_parameter = nothing, 
  cache = true)
p1 = :(model.evo_tree_classifier.η)
p2 = :(model.evo_tree_classifier.max_depth)

r1 = range(iterated_pipe, p1, lower=-2, upper=-0.5, scale=x->10^x)
r2 = range(iterated_pipe, p2, lower=2, upper=6)
NumericRange(2 ≤ model.evo_tree_classifier.max_depth ≤ 6; origin=4.0, unit=2.0)

Nominal ranges are defined by specifying values instead of lower and upper.
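
For example, a nominal range over an enumerated hyper-parameter might look like the following; it is shown for illustration only and is not used below:

r_nbins = range(iterated_pipe, :(model.evo_tree_classifier.nbins), values=[32, 64, 128])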

Next, we choose an optimization strategy from this list:

tuning = RandomSearch(rng=123)
RandomSearch(
  bounded = Distributions.Uniform, 
  positive_unbounded = Distributions.Gamma, 
  other = Distributions.Normal, 
  rng = Random.MersenneTwister(123))

Then we wrap the model, specifying a resampling strategy and a measure, as we did for IteratedModel. In fact, we can include a battery of measures; by default, optimization is with respect to performance estimates based on the first measure, but estimates for all measures can be accessed from the model's report.

The keyword n specifies the total number of models (sets of hyper-parameters) to evaluate.

tuned_iterated_pipe = TunedModel(model=iterated_pipe,
                                 range=[r1, r2],
                                 tuning=tuning,
                                 measures=[brier_loss, auc, accuracy],
                                 resampling=StratifiedCV(nfolds=6, rng=123),
                                 acceleration=CPUThreads(),
                                 n=40)
ProbabilisticTunedModel(
  model = ProbabilisticIteratedModel(
        model = ProbabilisticPipeline(continuous_encoder = ContinuousEncoder(drop_last = false, …), …), 
        controls = Any[IterationControl.Step(1), EarlyStopping.NumberSinceBest(4), EarlyStopping.TimeLimit(Dates.Millisecond(2000)), EarlyStopping.InvalidValue()], 
        resampling = Holdout(fraction_train = 0.7, …), 
        measure = BrierLoss(), 
        weights = nothing, 
        class_weights = nothing, 
        operation = MLJModelInterface.predict, 
        retrain = false, 
        check_measure = true, 
        iteration_parameter = nothing, 
        cache = true), 
  tuning = RandomSearch(
        bounded = Distributions.Uniform, 
        positive_unbounded = Distributions.Gamma, 
        other = Distributions.Normal, 
        rng = Random.MersenneTwister(123)), 
  resampling = StratifiedCV(
        nfolds = 6, 
        shuffle = true, 
        rng = Random.MersenneTwister(123)), 
  measure = MLJBase.Measure[BrierLoss(), AreaUnderCurve(), Accuracy()], 
  weights = nothing, 
  operation = nothing, 
  range = MLJBase.NumericRange{T, MLJBase.Bounded} where T[transformed NumericRange(-2.0 ≤ model.evo_tree_classifier.η ≤ -0.5; origin=-1.25, unit=0.75), NumericRange(2 ≤ model.evo_tree_classifier.max_depth ≤ 6; origin=4.0, unit=2.0)], 
  selection_heuristic = MLJTuning.NaiveSelection(nothing), 
  train_best = true, 
  repeats = 1, 
  n = 40, 
  acceleration = ComputationalResources.CPUThreads{Int64}(1), 
  acceleration_resampling = ComputationalResources.CPU1{Nothing}(nothing), 
  check_measure = true, 
  cache = true)

To save time, we skip the repeats here.

Binding our final model to data and training:

mach_tuned_iterated_pipe = machine(tuned_iterated_pipe, X, y)
fit!(mach_tuned_iterated_pipe)
Machine trained 1 time; caches data
  model: ProbabilisticTunedModel(model = ProbabilisticIteratedModel(model = ProbabilisticPipeline(continuous_encoder = ContinuousEncoder(drop_last = false, …), …), …), …)
  args: 
    1:	Source @007 ⏎ `ScientificTypesBase.Table{Union{AbstractVector{ScientificTypesBase.Continuous}, AbstractVector{ScientificTypesBase.Multiclass{2}}, AbstractVector{ScientificTypesBase.Multiclass{4}}, AbstractVector{ScientificTypesBase.Multiclass{3}}}}`
    2:	Source @099 ⏎ `AbstractVector{ScientificTypesBase.OrderedFactor{2}}`

As explained above, the training we have just performed was split internally into two separate steps:

  • A step to determine the parameter values that optimize the aggregated cross-validation scores

  • A final step that trains the optimal model on all available data. Future predictions predict(mach_tuned_iterated_pipe, ...) are based on this final training step.

From report(mach_tuned_iterated_pipe) we can extract details about the optimization procedure. For example:

rpt2 = report(mach_tuned_iterated_pipe);
best_booster = rpt2.best_model.model.evo_tree_classifier
EvoTreeClassifier(
  loss = EvoTrees.Softmax(), 
  nrounds = 10, 
  λ = 0.0, 
  γ = 0.0, 
  η = 0.14682597477521467, 
  max_depth = 2, 
  min_weight = 1.0, 
  rowsample = 1.0, 
  colsample = 1.0, 
  nbins = 64, 
  α = 0.5, 
  metric = :mlogloss, 
  rng = Random.MersenneTwister(123, (0, 86172, 85170, 780)), 
  device = "cpu")
print(
    "Optimal hyper-parameters: \n",
    "  max_depth: ", best_booster.max_depth, "\n",
    "  η:         ", best_booster.η
)
Optimal hyper-parameters: 
  max_depth: 2
  η:         0.14682597477521467

Using the confidence_intervals function we defined earlier:

e_best = rpt2.best_history_entry
confidence_intervals(e_best)
3×2 DataFrame
 Row │ measure           measurement
     │ Measure           Measuremen…
─────┼───────────────────────────────
   1 │ BrierLoss()       0.294±0.031
   2 │ AreaUnderCurve()  0.817±0.051
   3 │ Accuracy()        0.801±0.023

Digging a little deeper, we can learn what stopping criterion was applied in the case of the optimal model, and how many iterations were required:

rpt2.best_report.controls |> collect
4-element Vector{Tuple{Any, NamedTuple}}:
 (IterationControl.Step(1), (new_iterations = 26,))
 (EarlyStopping.NumberSinceBest(4), (done = true, log = "Stop triggered by EarlyStopping.NumberSinceBest(4) stopping criterion. "))
 (EarlyStopping.TimeLimit(Dates.Millisecond(2000)), (done = false, log = ""))
 (EarlyStopping.InvalidValue(), (done = false, log = ""))

Finally, we can visualize the optimization results:

plot(mach_tuned_iterated_pipe, size=(600,450))

Saving our model

Introduces: MLJ.save

Here's how to serialize our final, trained self-iterating, self-tuning pipeline machine using Julia's native serializer (see the manual for more options):

MLJ.save("tuned_iterated_pipe.jls", mach_tuned_iterated_pipe)

We'll deserialize this in "Testing the final model" below.

Final performance estimate

Finally, to get an even more accurate estimate of performance, we can evaluate our model using stratified cross-validation and all the data attached to our machine. Because this evaluation implies nested resampling, this computation takes quite a bit longer than the previous one (the tuning step is now repeated six times, using 5/6 of the data each time):

e_tuned_iterated_pipe = evaluate(tuned_iterated_pipe, X, y,
                                 resampling=StratifiedCV(nfolds=6, rng=123),
                                 measures=[brier_loss, auc, accuracy])
PerformanceEvaluation object with these fields:
  measure, measurement, operation, per_fold,
  per_observation, fitted_params_per_fold,
  report_per_fold, train_test_rows
Extract:
┌──────────────────┬─────────────┬──────────────┬────────────────────────────────────────────┐
│ measure          │ measurement │ operation    │ per_fold                                   │
├──────────────────┼─────────────┼──────────────┼────────────────────────────────────────────┤
│ BrierLoss()      │ 0.293       │ predict      │ [0.297, 0.334, 0.265, 0.278, 0.322, 0.26]  │
│ AreaUnderCurve() │ 0.816       │ predict      │ [0.804, 0.749, 0.823, 0.858, 0.772, 0.892] │
│ Accuracy()       │ 0.801       │ predict_mode │ [0.807, 0.793, 0.841, 0.793, 0.768, 0.805] │
└──────────────────┴─────────────┴──────────────┴────────────────────────────────────────────┘
confidence_intervals(e_tuned_iterated_pipe)
3×2 DataFrame
 Row │ measure           measurement
     │ Measure           Measuremen…
─────┼───────────────────────────────
   1 │ BrierLoss()       0.293±0.027
   2 │ AreaUnderCurve()  0.816±0.048
   3 │ Accuracy()        0.801±0.022

For comparison, here are the confidence intervals for the basic pipeline model (no feature selection and default hyperparameters):

confidence_intervals_basic_model
3×2 DataFrame
 Row │ measure           measurement
     │ Measure           Measuremen…
─────┼───────────────────────────────
   1 │ BrierLoss()       0.313±0.014
   2 │ AreaUnderCurve()  0.788±0.023
   3 │ Accuracy()        0.783±0.012

As each pair of intervals overlaps, it's doubtful that the small changes here can be assigned statistical significance. The default booster hyper-parameters do a pretty good job.

Testing the final model

We now determine the performance of our model on our lock-and-throw-away-the-key holdout set. To demonstrate deserialization, we'll pretend we're in a new Julia session (but have called import/using on the same packages). Then the following should suffice to recover our model trained under "Hyper-parameter optimization" above:

mach_restored = machine("tuned_iterated_pipe.jls")
Machine trained 1 time; caches data
  model: ProbabilisticTunedModel(model = ProbabilisticIteratedModel(model = ProbabilisticPipeline(continuous_encoder = ContinuousEncoder(drop_last = false, …), …), …), …)
  args: 

We compute predictions on the holdout set:

ŷ_tuned = predict(mach_restored, Xtest);
ŷ_tuned[1]
       UnivariateFinite{ScientificTypesBase.Multiclass{2}} 
       ┌                                        ┐ 
    No ┤■■■■■■■■■■■■■■■■■ 0.4644309597323305      
   Yes ┤■■■■■■■■■■■■■■■■■■■■ 0.5355690402676695   
       └                                        ┘ 

And we can compute the final performance measures:

print(
    "Tuned model measurements on test:\n",
    "  brier loss: ", brier_loss(ŷ_tuned, ytest) |> mean, "\n",
    "  auc:        ", auc(ŷ_tuned, ytest),                "\n",
    "  accuracy:   ", accuracy(mode.(ŷ_tuned), ytest)
)
Tuned model measurements on test:
  brier loss: 0.2725784645587789
  auc:        0.8470046082949308
  accuracy:   0.8056872037914692

For comparison, here's the performance of the basic pipeline model on the same test set:

mach_basic = machine(pipe, X, y)
fit!(mach_basic, verbosity=0)

ŷ_basic = predict(mach_basic, Xtest);

print(
    "Basic model measurements on test set:\n",
    "  brier loss: ", brier_loss(ŷ_basic, ytest) |> mean, "\n",
    "  auc:        ", auc(ŷ_basic, ytest),                "\n",
    "  accuracy:   ", accuracy(mode.(ŷ_basic), ytest)
)
Basic model measurements on test set:
  brier loss: 0.3059458610206901
  auc:        0.8120967741935484
  accuracy:   0.7725118483412322