# Common MLJ Workflows

## Data ingestion
```julia
import RDatasets
channing = RDatasets.dataset("boot", "channing")
first(channing, 4)
```

| | Sex | Entry | Exit | Time | Cens |
|---|---|---|---|---|---|
| | Cat… | Int32 | Int32 | Int32 | Int32 |
| 1 | Male | 782 | 909 | 127 | 1 |
| 2 | Male | 1020 | 1128 | 108 | 1 |
| 3 | Male | 856 | 969 | 113 | 1 |
| 4 | Male | 915 | 957 | 42 | 1 |
Inspecting metadata, including column scientific types:

```julia
schema(channing)
```
```
┌─────────┬────────────────────────────────┬───────────────┐
│ _.names │ _.types                        │ _.scitypes    │
├─────────┼────────────────────────────────┼───────────────┤
│ Sex     │ CategoricalValue{String,UInt8} │ Multiclass{2} │
│ Entry   │ Int32                          │ Count         │
│ Exit    │ Int32                          │ Count         │
│ Time    │ Int32                          │ Count         │
│ Cens    │ Int32                          │ Count         │
└─────────┴────────────────────────────────┴───────────────┘
_.nrows = 462
```
Unpacking data and correcting for wrong scitypes:

```julia
y, X = unpack(channing,
              ==(:Exit),            # y is the :Exit column
              !=(:Time);            # X is the rest, except :Time
              :Exit=>Continuous,
              :Entry=>Continuous,
              :Cens=>Multiclass)
first(X, 4)
```

| | Sex | Entry | Cens |
|---|---|---|---|
| | Cat… | Float64 | Cat… |
| 1 | Male | 782.0 | 1 |
| 2 | Male | 1020.0 | 1 |
| 3 | Male | 856.0 | 1 |
| 4 | Male | 915.0 | 1 |
Note: Before Julia 1.2, replace `!=(:Time)` with `col -> col != :Time`.
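The `==(:Exit)` and `!=(:Time)` arguments are ordinary curried comparison functions on column names, which you can check at the REPL (a quick illustration, not part of the workflow above):

```julia
# ==(:Exit) and !=(:Time) each take a column name and return a Bool,
# which is how unpack decides where each column goes.
is_exit  = ==(:Exit)
not_time = !=(:Time)

is_exit(:Exit)   # true
is_exit(:Entry)  # false
not_time(:Time)  # false
not_time(:Sex)   # true
```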
```julia
y[1:4]
```
```
4-element Array{Float64,1}:
  909.0
 1128.0
  969.0
  957.0
```

Loading a built-in supervised dataset:
```julia
X, y = @load_iris;
selectrows(X, 1:4) # selectrows works for any Tables.jl table
```
```
(sepal_length = [5.1, 4.9, 4.7, 4.6],
 sepal_width = [3.5, 3.0, 3.2, 3.1],
 petal_length = [1.4, 1.4, 1.3, 1.5],
 petal_width = [0.2, 0.2, 0.2, 0.2],)
```
```julia
y[1:4]
```
```
4-element CategoricalArray{String,1,UInt32}:
 "setosa"
 "setosa"
 "setosa"
 "setosa"
```

## Model search
Reference: Model Search
Searching for a supervised model:
```julia
X, y = @load_boston
models(matching(X, y))
```
```
54-element Array{NamedTuple{(:name, :package_name, :is_supervised, :docstring, :hyperparameter_ranges, :hyperparameter_types, :hyperparameters, :implemented_methods, :is_pure_julia, :is_wrapper, :load_path, :package_license, :package_url, :package_uuid, :prediction_type, :supports_online, :supports_weights, :input_scitype, :target_scitype, :output_scitype),T} where T<:Tuple,1}:
 (name = ARDRegressor, package_name = ScikitLearn, ... )
 (name = AdaBoostRegressor, package_name = ScikitLearn, ... )
 (name = BaggingRegressor, package_name = ScikitLearn, ... )
 (name = BayesianRidgeRegressor, package_name = ScikitLearn, ... )
 (name = ConstantRegressor, package_name = MLJModels, ... )
 (name = DecisionTreeRegressor, package_name = DecisionTree, ... )
 (name = DeterministicConstantRegressor, package_name = MLJModels, ... )
 (name = DummyRegressor, package_name = ScikitLearn, ... )
 (name = ElasticNetCVRegressor, package_name = ScikitLearn, ... )
 (name = ElasticNetRegressor, package_name = MLJLinearModels, ... )
 ⋮
 (name = RidgeRegressor, package_name = MultivariateStats, ... )
 (name = RidgeRegressor, package_name = ScikitLearn, ... )
 (name = RobustRegressor, package_name = MLJLinearModels, ... )
 (name = SGDRegressor, package_name = ScikitLearn, ... )
 (name = SVMLinearRegressor, package_name = ScikitLearn, ... )
 (name = SVMNuRegressor, package_name = ScikitLearn, ... )
 (name = SVMRegressor, package_name = ScikitLearn, ... )
 (name = TheilSenRegressor, package_name = ScikitLearn, ... )
 (name = XGBoostRegressor, package_name = XGBoost, ... )
```
```julia
models(matching(X, y))[6]
```
```
CART decision tree regressor.
→ based on [DecisionTree](https://github.com/bensadeghi/DecisionTree.jl).
→ do `@load DecisionTreeRegressor pkg="DecisionTree"` to use the model.
→ do `?DecisionTreeRegressor` for documentation.
(name = "DecisionTreeRegressor",
 package_name = "DecisionTree",
 is_supervised = true,
 docstring = "CART decision tree regressor.\n→ based on [DecisionTree](https://github.com/bensadeghi/DecisionTree.jl).\n→ do `@load DecisionTreeRegressor pkg=\"DecisionTree\"` to use the model.\n→ do `?DecisionTreeRegressor` for documentation.",
 hyperparameter_ranges = (nothing, nothing, nothing, nothing, nothing, nothing, nothing),
 hyperparameter_types = ("Int64", "Int64", "Int64", "Float64", "Int64", "Bool", "Float64"),
 hyperparameters = (:max_depth, :min_samples_leaf, :min_samples_split, :min_purity_increase, :n_subfeatures, :post_prune, :merge_purity_threshold),
 implemented_methods = Symbol[:clean!, :fit, :fitted_params, :predict],
 is_pure_julia = true,
 is_wrapper = false,
 load_path = "MLJModels.DecisionTree_.DecisionTreeRegressor",
 package_license = "MIT",
 package_url = "https://github.com/bensadeghi/DecisionTree.jl",
 package_uuid = "7806a523-6efd-50cb-b5f6-3fa6f1930dbb",
 prediction_type = :deterministic,
 supports_online = false,
 supports_weights = false,
 input_scitype = Table{_s23} where _s23<:Union{AbstractArray{_s25,1} where _s25<:Continuous, AbstractArray{_s25,1} where _s25<:Count, AbstractArray{_s25,1} where _s25<:OrderedFactor},
 target_scitype = AbstractArray{Continuous,1},
 output_scitype = Unknown,)
```

More refined searches:
```julia
models() do model
    matching(model, X, y) &&
        model.prediction_type == :deterministic &&
        model.is_pure_julia
end
```
```
15-element Array{NamedTuple{(:name, :package_name, :is_supervised, :docstring, :hyperparameter_ranges, :hyperparameter_types, :hyperparameters, :implemented_methods, :is_pure_julia, :is_wrapper, :load_path, :package_license, :package_url, :package_uuid, :prediction_type, :supports_online, :supports_weights, :input_scitype, :target_scitype, :output_scitype),T} where T<:Tuple,1}:
 (name = DecisionTreeRegressor, package_name = DecisionTree, ... )
 (name = DeterministicConstantRegressor, package_name = MLJModels, ... )
 (name = ElasticNetRegressor, package_name = MLJLinearModels, ... )
 (name = EvoTreeRegressor, package_name = EvoTrees, ... )
 (name = HuberRegressor, package_name = MLJLinearModels, ... )
 (name = KNNRegressor, package_name = NearestNeighbors, ... )
 (name = LADRegressor, package_name = MLJLinearModels, ... )
 (name = LassoRegressor, package_name = MLJLinearModels, ... )
 (name = LinearRegressor, package_name = MLJLinearModels, ... )
 (name = NeuralNetworkRegressor, package_name = MLJFlux, ... )
 (name = QuantileRegressor, package_name = MLJLinearModels, ... )
 (name = RandomForestRegressor, package_name = DecisionTree, ... )
 (name = RidgeRegressor, package_name = MLJLinearModels, ... )
 (name = RidgeRegressor, package_name = MultivariateStats, ... )
 (name = RobustRegressor, package_name = MLJLinearModels, ... )
```

Searching for an unsupervised model:
```julia
models(matching(X))
```
```
23-element Array{NamedTuple{(:name, :package_name, :is_supervised, :docstring, :hyperparameter_ranges, :hyperparameter_types, :hyperparameters, :implemented_methods, :is_pure_julia, :is_wrapper, :load_path, :package_license, :package_url, :package_uuid, :prediction_type, :supports_online, :supports_weights, :input_scitype, :target_scitype, :output_scitype),T} where T<:Tuple,1}:
 (name = AffinityPropagation, package_name = ScikitLearn, ... )
 (name = AgglomerativeClustering, package_name = ScikitLearn, ... )
 (name = Birch, package_name = ScikitLearn, ... )
 (name = ContinuousEncoder, package_name = MLJModels, ... )
 (name = DBSCAN, package_name = ScikitLearn, ... )
 (name = FeatureAgglomeration, package_name = ScikitLearn, ... )
 (name = FeatureSelector, package_name = MLJModels, ... )
 (name = FillImputer, package_name = MLJModels, ... )
 (name = ICA, package_name = MultivariateStats, ... )
 (name = KMeans, package_name = Clustering, ... )
 ⋮
 (name = MeanShift, package_name = ScikitLearn, ... )
 (name = MiniBatchKMeans, package_name = ScikitLearn, ... )
 (name = OPTICS, package_name = ScikitLearn, ... )
 (name = OneClassSVM, package_name = LIBSVM, ... )
 (name = OneHotEncoder, package_name = MLJModels, ... )
 (name = PCA, package_name = MultivariateStats, ... )
 (name = SpectralClustering, package_name = ScikitLearn, ... )
 (name = Standardizer, package_name = MLJModels, ... )
 (name = StaticTransformer, package_name = MLJBase, ... )
```

Getting the metadata entry for a given model type:
```julia
info("PCA")
info("RidgeRegressor", pkg="MultivariateStats") # a model type in multiple packages
```
```
Ridge regressor with regularization parameter lambda. Learns a linear regression with a penalty on the l2 norm of the coefficients.
→ based on [MultivariateStats](https://github.com/JuliaStats/MultivariateStats.jl).
→ do `@load RidgeRegressor pkg="MultivariateStats"` to use the model.
→ do `?RidgeRegressor` for documentation.
(name = "RidgeRegressor",
 package_name = "MultivariateStats",
 is_supervised = true,
 docstring = "Ridge regressor with regularization parameter lambda. Learns a linear regression with a penalty on the l2 norm of the coefficients.\n→ based on [MultivariateStats](https://github.com/JuliaStats/MultivariateStats.jl).\n→ do `@load RidgeRegressor pkg=\"MultivariateStats\"` to use the model.\n→ do `?RidgeRegressor` for documentation.",
 hyperparameter_ranges = (nothing,),
 hyperparameter_types = ("Real",),
 hyperparameters = (:lambda,),
 implemented_methods = Symbol[:clean!, :fit, :fitted_params, :predict],
 is_pure_julia = true,
 is_wrapper = false,
 load_path = "MLJModels.MultivariateStats_.RidgeRegressor",
 package_license = "MIT",
 package_url = "https://github.com/JuliaStats/MultivariateStats.jl",
 package_uuid = "6f286f6a-111f-5878-ab1e-185364afe411",
 prediction_type = :deterministic,
 supports_online = false,
 supports_weights = false,
 input_scitype = Table{_s23} where _s23<:(AbstractArray{_s25,1} where _s25<:Continuous),
 target_scitype = AbstractArray{Continuous,1},
 output_scitype = Unknown,)
```

## Instantiating a model
Reference: Getting Started
```julia
@load DecisionTreeClassifier
model = DecisionTreeClassifier(min_samples_split=5, max_depth=4)
```
```
DecisionTreeClassifier(
    max_depth = 4,
    min_samples_leaf = 1,
    min_samples_split = 5,
    min_purity_increase = 0.0,
    n_subfeatures = 0,
    post_prune = false,
    merge_purity_threshold = 1.0,
    pdf_smoothing = 0.0,
    display_depth = 5) @ 8…21
```

or

```julia
model = @load DecisionTreeClassifier
model.min_samples_split = 5
model.max_depth = 4
```

## Evaluating a model
Reference: Evaluating Model Performance
```julia
X, y = @load_boston
model = @load KNNRegressor
evaluate(model, X, y, resampling=CV(nfolds=5), measure=[rms, mav])
```
```
┌───────────┬───────────────┬───────────────────────────────┐
│ _.measure │ _.measurement │ _.per_fold                    │
├───────────┼───────────────┼───────────────────────────────┤
│ rms       │ 8.77          │ [8.53, 8.8, 10.7, 9.43, 5.59] │
│ mae       │ 6.02          │ [6.52, 5.7, 7.65, 6.09, 4.11] │
└───────────┴───────────────┴───────────────────────────────┘
_.per_observation = [missing, missing]
```
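For orientation, the two measures above are the usual root-mean-square and mean-absolute errors. On plain vectors the arithmetic is (a sketch, not MLJ's implementation):

```julia
using Statistics

yhat  = [2.0, 4.0, 6.0]   # stand-in predictions
ytrue = [1.0, 5.0, 6.0]   # stand-in ground truth

rms_val = sqrt(mean((yhat .- ytrue).^2))  # root-mean-square error
mae_val = mean(abs.(yhat .- ytrue))       # mean absolute error
```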
## Basic fit/evaluate/predict by hand
Reference: Getting Started, Machines, Evaluating Model Performance, Performance Measures
```julia
import RDatasets
vaso = RDatasets.dataset("robustbase", "vaso"); # a DataFrame
first(vaso, 3)
```

| | Volume | Rate | Y |
|---|---|---|---|
| | Float64 | Float64 | Int64 |
| 1 | 3.7 | 0.825 | 1 |
| 2 | 3.5 | 1.09 | 1 |
| 3 | 1.25 | 2.5 | 1 |
```julia
y, X = unpack(vaso, ==(:Y), c -> true; :Y => Multiclass)
tree_model = @load DecisionTreeClassifier
```
```
┌ Info: A model type "DecisionTreeClassifier" is already loaded.
└ No new code loaded.
```

Bind the model and data together in a *machine*, which will additionally store the learned parameters (fitresults) when fit:

```julia
tree = machine(tree_model, X, y)
```
```
Machine{DecisionTreeClassifier} @ 1…41
```
Split row indices into training and evaluation rows:

```julia
train, test = partition(eachindex(y), 0.7, shuffle=true, rng=1234); # 70:30 split
```
```
([27, 28, 30, 31, 32, 18, 21, 9, 26, 14 … 7, 39, 2, 37, 1, 8, 19, 25, 35, 34], [22, 13, 11, 4, 10, 16, 3, 20, 29, 23, 12, 24])
```

Fit on train and evaluate on test:

```julia
fit!(tree, rows=train)
yhat = predict(tree, rows=test);
mean(cross_entropy(yhat, y[test]))
```
```
6.5216583816514975
```

Predict on new data:
```julia
Xnew = (Volume=3*rand(3), Rate=3*rand(3))
predict(tree, Xnew) # a vector of distributions
```
```
3-element MLJBase.UnivariateFiniteArray{Multiclass{2},Int64,UInt32,Float64,1}:
 UnivariateFinite{Multiclass{2}}(0=>0.273, 1=>0.727)
 UnivariateFinite{Multiclass{2}}(0=>0.273, 1=>0.727)
 UnivariateFinite{Multiclass{2}}(0=>0.9, 1=>0.1)
```
```julia
predict_mode(tree, Xnew) # a vector of point-predictions
```
```
3-element CategoricalArray{Int64,1,UInt32}:
 1
 1
 0
```

### More performance evaluation examples
```julia
import LossFunctions.ZeroOneLoss
```

Evaluating model + data directly:
```julia
evaluate(tree_model, X, y,
         resampling=Holdout(fraction_train=0.7, shuffle=true, rng=1234),
         measure=[cross_entropy, ZeroOneLoss()])
```
```
┌───────────────┬───────────────┬────────────┐
│ _.measure     │ _.measurement │ _.per_fold │
├───────────────┼───────────────┼────────────┤
│ cross_entropy │ 6.52          │ [6.52]     │
│ ZeroOneLoss   │ 0.417         │ [0.417]    │
└───────────────┴───────────────┴────────────┘
_.per_observation = [[[0.105, 36.0, ..., 1.3]], [[0.0, 1.0, ..., 1.0]]]
```
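`ZeroOneLoss` scores each observation 0 for a correct point prediction and 1 otherwise, so its mean is the misclassification rate. With plain vectors standing in for `predict_mode` output and the ground truth (a sketch, not the LossFunctions internals):

```julia
using Statistics

yhat  = [1, 1, 0, 1, 0, 0]  # stand-in point predictions
ytrue = [1, 0, 0, 0, 0, 1]  # stand-in observed classes

per_observation = Float64.(yhat .!= ytrue)  # 0.0 if correct, 1.0 if not
mean(per_observation)                       # misclassification rate, here 0.5
```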
If a machine is already defined, as above:

```julia
evaluate!(tree,
          resampling=Holdout(fraction_train=0.7, shuffle=true, rng=1234),
          measure=[cross_entropy, ZeroOneLoss()])
```
```
┌───────────────┬───────────────┬────────────┐
│ _.measure     │ _.measurement │ _.per_fold │
├───────────────┼───────────────┼────────────┤
│ cross_entropy │ 6.52          │ [6.52]     │
│ ZeroOneLoss   │ 0.417         │ [0.417]    │
└───────────────┴───────────────┴────────────┘
_.per_observation = [[[0.105, 36.0, ..., 1.3]], [[0.0, 1.0, ..., 1.0]]]
```
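Per-observation cross-entropy is `-log(p)`, where `p` is the probability the model assigned to the observed class, clamped below by the measure's `eps`. That clamp is why a confidently wrong prediction contributes roughly 36 in the `_.per_observation` entries above (a sketch of the arithmetic, not MLJ's code):

```julia
# -log of a moderately confident correct prediction:
-log(0.9)                      # ≈ 0.105

# a prediction assigning (essentially) zero probability to the observed
# class is clamped at eps before taking the log:
eps_clamp = 2.220446049250313e-16
-log(eps_clamp)                # ≈ 36.04
```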
Using cross-validation:

```julia
evaluate!(tree, resampling=CV(nfolds=5, shuffle=true, rng=1234),
          measure=[cross_entropy, ZeroOneLoss()])
```
```
┌───────────────┬───────────────┬──────────────────────────────────┐
│ _.measure     │ _.measurement │ _.per_fold                       │
├───────────────┼───────────────┼──────────────────────────────────┤
│ cross_entropy │ 3.27          │ [9.25, 0.598, 4.93, 1.07, 0.523] │
│ ZeroOneLoss   │ 0.436         │ [0.5, 0.375, 0.375, 0.5, 0.429]  │
└───────────────┴───────────────┴──────────────────────────────────┘
_.per_observation = [[[2.22e-16, 0.944, ..., 2.22e-16], [0.847, 0.56, ..., 0.56], [0.799, 0.598, ..., 36.0], [2.01, 2.01, ..., 0.143], [0.847, 2.22e-16, ..., 0.56]], [[0.0, 1.0, ..., 0.0], [1.0, 0.0, ..., 0.0], [1.0, 0.0, ..., 1.0], [1.0, 1.0, ..., 0.0], [1.0, 0.0, ..., 0.0]]]
```
With user-specified train/test pairs of row indices:

```julia
f1, f2, f3 = 1:13, 14:26, 27:36
pairs = [(f1, vcat(f2, f3)), (f2, vcat(f3, f1)), (f3, vcat(f1, f2))];
evaluate!(tree,
          resampling=pairs,
          measure=[cross_entropy, ZeroOneLoss()])
```
```
┌───────────────┬───────────────┬───────────────────────┐
│ _.measure     │ _.measurement │ _.per_fold            │
├───────────────┼───────────────┼───────────────────────┤
│ cross_entropy │ 5.88          │ [2.16, 11.0, 4.51]    │
│ ZeroOneLoss   │ 0.241         │ [0.304, 0.304, 0.115] │
└───────────────┴───────────────┴───────────────────────┘
_.per_observation = [[[0.154, 0.154, ..., 0.154], [2.22e-16, 36.0, ..., 2.22e-16], [2.22e-16, 2.22e-16, ..., 0.693]], [[0.0, 0.0, ..., 0.0], [0.0, 1.0, ..., 0.0], [0.0, 0.0, ..., 0.0]]]
```
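The three `(train, test)` pairs above can be sanity-checked in plain Julia: within each pair the indices are disjoint, and the training folds together cover every row exactly once:

```julia
f1, f2, f3 = 1:13, 14:26, 27:36
pairs = [(f1, vcat(f2, f3)), (f2, vcat(f3, f1)), (f3, vcat(f1, f2))]

# no pair lets a training row leak into its own test set:
all(isempty(intersect(train, test)) for (train, test) in pairs)  # true

# the training folds partition the rows 1:36:
sort(vcat(first.(pairs)...)) == collect(1:36)                    # true
```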
Changing a hyperparameter and re-evaluating:

```julia
tree_model.max_depth = 3
evaluate!(tree,
          resampling=CV(nfolds=5, shuffle=true, rng=1234),
          measure=[cross_entropy, ZeroOneLoss()])
```
```
┌───────────────┬───────────────┬────────────────────────────────────┐
│ _.measure     │ _.measurement │ _.per_fold                         │
├───────────────┼───────────────┼────────────────────────────────────┤
│ cross_entropy │ 2.25          │ [9.18, 0.484, 0.427, 0.564, 0.624] │
│ ZeroOneLoss   │ 0.336         │ [0.375, 0.25, 0.25, 0.375, 0.429]  │
└───────────────┴───────────────┴────────────────────────────────────┘
_.per_observation = [[[2.22e-16, 1.32, ..., 2.22e-16], [2.22e-16, 0.318, ..., 0.318], [0.405, 2.22e-16, ..., 2.22e-16], [1.5, 1.5, ..., 2.22e-16], [1.22, 2.22e-16, ..., 0.348]], [[0.0, 1.0, ..., 0.0], [0.0, 0.0, ..., 0.0], [0.0, 0.0, ..., 0.0], [1.0, 1.0, ..., 0.0], [1.0, 0.0, ..., 0.0]]]
```
## Inspecting training results

Fit an ordinary least squares model to some synthetic data:

```julia
x1 = rand(100)
x2 = rand(100)
X = (x1=x1, x2=x2)
y = x1 - 2x2 + 0.1*rand(100);
ols_model = @load LinearRegressor pkg=GLM
ols = machine(ols_model, X, y)
fit!(ols)
```
```
Machine{LinearRegressor} @ 8…46
```
Get a named tuple representing the learned parameters, human-readable if appropriate:

```julia
fitted_params(ols)
```
```
(coef = [0.9980408785681985, -2.0187847643414165],
 intercept = 0.06254921987034147,)
```

Get other training-related information:

```julia
report(ols)
```
```
(deviance = 0.06678466049975317,
 dof_residual = 97.0,
 stderror = [0.011055933168078569, 0.009722928721534558, 0.007964653030961171],
 vcov = [0.0001222336582170198 -5.867030307356935e-6 -6.147715045609078e-5; -5.867030307356935e-6 9.453534292404165e-5 -4.620004413929122e-5; -6.147715045609078e-5 -4.620004413929122e-5 6.343569790359898e-5],)
```

## Basic fit/transform for unsupervised models
Load data:

```julia
X, y = @load_iris
train, test = partition(eachindex(y), 0.97, shuffle=true, rng=123)
```
```
([125, 100, 130, 9, 70, 148, 39, 64, 6, 107 … 110, 59, 139, 21, 112, 144, 140, 72, 109, 41], [106, 147, 47, 5])
```

Instantiate and fit the model/machine:

```julia
@load PCA
pca_model = PCA(maxoutdim=2)
pca = machine(pca_model, X)
fit!(pca, rows=train)
```
```
Machine{PCA} @ 1…11
```

Transform selected data bound to the machine:

```julia
transform(pca, rows=test);
```
```
(x1 = [-3.3942826854483243, -1.5219827578765068, 2.538247455185219, 2.7299639893931373],
 x2 = [0.5472450223745241, -0.36842368617126214, 0.5199299511335698, 0.3448466122232363],)
```

Transform new data:

```julia
Xnew = (sepal_length=rand(3), sepal_width=rand(3),
        petal_length=rand(3), petal_width=rand(3));
transform(pca, Xnew)
```
```
(x1 = [4.730892947224812, 4.847578025235597, 4.1195082470482385],
 x2 = [-4.397069726165956, -4.933272189569572, -4.726867850491834],)
```

## Inverting learned transformations
```julia
y = rand(100);
stand_model = UnivariateStandardizer()
stand = machine(stand_model, y)
fit!(stand)
z = transform(stand, y);
@assert inverse_transform(stand, z) ≈ y # true
```
```
[ Info: Training Machine{UnivariateStandardizer} @ 9…12.
```

## Nested hyperparameter tuning
Reference: Tuning Models
Define a model with nested hyperparameters:

```julia
tree_model = @load DecisionTreeClassifier
forest_model = EnsembleModel(atom=tree_model, n=300)
```
```
ProbabilisticEnsembleModel(
    atom = DecisionTreeClassifier(
        max_depth = -1,
        min_samples_leaf = 1,
        min_samples_split = 2,
        min_purity_increase = 0.0,
        n_subfeatures = 0,
        post_prune = false,
        merge_purity_threshold = 1.0,
        pdf_smoothing = 0.0,
        display_depth = 5),
    atomic_weights = Float64[],
    bagging_fraction = 0.8,
    rng = MersenneTwister(UInt32[0x5e195684, 0x67952e4a, 0x3888593c, 0x4fe704ab]) @ 22,
    n = 300,
    acceleration = CPU1{Nothing}(nothing),
    out_of_bag_measure = Any[]) @ 6…88
```

Inspect all hyperparameters, even nested ones (returns a nested named tuple):
```julia
params(forest_model)
```
```
(atom = (max_depth = -1,
         min_samples_leaf = 1,
         min_samples_split = 2,
         min_purity_increase = 0.0,
         n_subfeatures = 0,
         post_prune = false,
         merge_purity_threshold = 1.0,
         pdf_smoothing = 0.0,
         display_depth = 5,),
 atomic_weights = Float64[],
 bagging_fraction = 0.8,
 rng = MersenneTwister(UInt32[0x5e195684, 0x67952e4a, 0x3888593c, 0x4fe704ab]) @ 22,
 n = 300,
 acceleration = CPU1{Nothing}(nothing),
 out_of_bag_measure = Any[],)
```

Define ranges for hyperparameters to be tuned:
```julia
r1 = range(forest_model, :bagging_fraction, lower=0.5, upper=1.0, scale=:log10)
```
```
MLJBase.NumericRange(Float64, :bagging_fraction, ... )
```
```julia
r2 = range(forest_model, :(atom.n_subfeatures), lower=1, upper=4) # nested
```
```
MLJBase.NumericRange(Int64, :(atom.n_subfeatures), ... )
```

Wrap the model in a tuning strategy:
```julia
tuned_forest = TunedModel(model=forest_model,
                          tuning=Grid(resolution=12),
                          resampling=CV(nfolds=6),
                          ranges=[r1, r2],
                          measure=cross_entropy)
```
```
ProbabilisticTunedModel(
    model = ProbabilisticEnsembleModel(
        atom = DecisionTreeClassifier @ 1…38,
        atomic_weights = Float64[],
        bagging_fraction = 0.8,
        rng = MersenneTwister(UInt32[0x5e195684, 0x67952e4a, 0x3888593c, 0x4fe704ab]) @ 22,
        n = 300,
        acceleration = CPU1{Nothing}(nothing),
        out_of_bag_measure = Any[]),
    tuning = Grid(
        goal = nothing,
        resolution = 12,
        shuffle = true,
        rng = MersenneTwister(UInt32[0x5e195684, 0x67952e4a, 0x3888593c, 0x4fe704ab]) @ 22),
    resampling = CV(
        nfolds = 6,
        shuffle = false,
        rng = MersenneTwister(UInt32[0x5e195684, 0x67952e4a, 0x3888593c, 0x4fe704ab]) @ 22),
    measure = cross_entropy(
        eps = 2.220446049250313e-16),
    weights = nothing,
    operation = MLJModelInterface.predict,
    range = MLJBase.NumericRange{T,MLJBase.Bounded,Symbol} where T[NumericRange{Float64,…} @ 1…23, NumericRange{Int64,…} @ 1…54],
    train_best = true,
    repeats = 1,
    n = nothing,
    acceleration = CPU1{Nothing}(nothing),
    acceleration_resampling = CPU1{Nothing}(nothing),
    check_measure = true) @ 1…23
```

Bind the wrapped model to data:

```julia
tuned = machine(tuned_forest, X, y)
```
```
Machine{ProbabilisticTunedModel{Grid,…}} @ 4…29
```
Fitting the resultant machine optimizes the hyperparameters specified in `range`, using the specified `tuning` and `resampling` strategies and performance `measure` (possibly a vector of measures), and retrains on all data bound to the machine:

```julia
fit!(tuned)
```
```
Machine{ProbabilisticTunedModel{Grid,…}} @ 4…29
```
Inspecting the optimal model:

```julia
F = fitted_params(tuned)
```
```
(best_model = ProbabilisticEnsembleModel{DecisionTreeClassifier} @ 4…23,
 best_fitted_params = (fitresult = WrappedEnsemble{Tuple{Node{Float64,…},…},…} @ 1…06,),)
```
```julia
F.best_model
```
```
ProbabilisticEnsembleModel(
    atom = DecisionTreeClassifier(
        max_depth = -1,
        min_samples_leaf = 1,
        min_samples_split = 2,
        min_purity_increase = 0.0,
        n_subfeatures = 3,
        post_prune = false,
        merge_purity_threshold = 1.0,
        pdf_smoothing = 0.0,
        display_depth = 5),
    atomic_weights = Float64[],
    bagging_fraction = 0.5,
    rng = MersenneTwister(UInt32[0x5e195684, 0x67952e4a, 0x3888593c, 0x4fe704ab]) @ 345,
    n = 300,
    acceleration = CPU1{Nothing}(nothing),
    out_of_bag_measure = Any[]) @ 4…23
```

Inspecting details of tuning procedure:
```julia
report(tuned)
```
```
(best_model = ProbabilisticEnsembleModel{DecisionTreeClassifier} @ 4…23,
 best_result = (measure = MLJBase.CrossEntropy{Float64}[cross_entropy],
                measurement = [0.15391166178950372],),
 best_report = (measures = Any[],
                oob_measurements = missing,),
 history = Tuple{MLJ.ProbabilisticEnsembleModel{MLJModels.DecisionTree_.DecisionTreeClassifier},NamedTuple{(:measure, :measurement),Tuple{Array{MLJBase.CrossEntropy{Float64},1},Array{Float64,1}}}}[(ProbabilisticEnsembleModel{DecisionTreeClassifier} @ 5…80, (measure = [cross_entropy], measurement = [0.1614254974088198])), (ProbabilisticEnsembleModel{DecisionTreeClassifier} @ 9…40, (measure = [cross_entropy], measurement = [0.6581192147910828])), (ProbabilisticEnsembleModel{DecisionTreeClassifier} @ 9…40, (measure = [cross_entropy], measurement = [0.20358546901262772])), (ProbabilisticEnsembleModel{DecisionTreeClassifier} @ 3…81, (measure = [cross_entropy], measurement = [0.40821706512340117])), (ProbabilisticEnsembleModel{DecisionTreeClassifier} @ 1…20, (measure = [cross_entropy], measurement = [0.42995473342934903])), (ProbabilisticEnsembleModel{DecisionTreeClassifier} @ 9…85, (measure = [cross_entropy], measurement = [0.6380960415648973])), (ProbabilisticEnsembleModel{DecisionTreeClassifier} @ 3…95, (measure = [cross_entropy], measurement = [0.18240114746418012])), (ProbabilisticEnsembleModel{DecisionTreeClassifier} @ 2…10, (measure = [cross_entropy], measurement = [0.15405897356015485])), (ProbabilisticEnsembleModel{DecisionTreeClassifier} @ 9…99, (measure = [cross_entropy], measurement = [0.22095526228263362])), (ProbabilisticEnsembleModel{DecisionTreeClassifier} @ 1…10, (measure = [cross_entropy], measurement = [0.18487375870126552])) … (ProbabilisticEnsembleModel{DecisionTreeClassifier} @ 7…71, (measure = [cross_entropy], measurement = [0.17203022785502062])), (ProbabilisticEnsembleModel{DecisionTreeClassifier} @ 3…51, (measure = [cross_entropy], measurement = [0.19306762894241777])), (ProbabilisticEnsembleModel{DecisionTreeClassifier} @ 4…65, (measure = [cross_entropy], measurement = [0.18656758881481814])), (ProbabilisticEnsembleModel{DecisionTreeClassifier} @ 2…14, (measure = [cross_entropy], measurement = [0.6653155632749154])), (ProbabilisticEnsembleModel{DecisionTreeClassifier} @ 1…41, (measure = [cross_entropy], measurement = [0.43404067957200704])), (ProbabilisticEnsembleModel{DecisionTreeClassifier} @ 6…83, (measure = [cross_entropy], measurement = [0.20951394629936668])), (ProbabilisticEnsembleModel{DecisionTreeClassifier} @ 6…25, (measure = [cross_entropy], measurement = [0.20055887882148096])), (ProbabilisticEnsembleModel{DecisionTreeClassifier} @ 1…25, (measure = [cross_entropy], measurement = [0.20273888234527462])), (ProbabilisticEnsembleModel{DecisionTreeClassifier} @ 3…26, (measure = [cross_entropy], measurement = [0.17050616206171923])), (ProbabilisticEnsembleModel{DecisionTreeClassifier} @ 9…00, (measure = [cross_entropy], measurement = [0.2062870732605244]))],
 plotting = (parameter_names = ["bagging_fraction", "atom.n_subfeatures"],
             parameter_scales = Symbol[:log10, :linear],
             parameter_values = Any[0.5671562610977313 3; 0.8815912549960212 4; … ; 0.6433324490047159 3; 0.8277532798848107 2],
             measurements = [0.1614254974088198, 0.6581192147910828, 0.20358546901262772, 0.40821706512340117, 0.42995473342934903, 0.6380960415648973, 0.18240114746418012, 0.15405897356015485, 0.22095526228263362, 0.18487375870126552 … 0.17203022785502062, 0.19306762894241777, 0.18656758881481814, 0.6653155632749154, 0.43404067957200704, 0.20951394629936668, 0.20055887882148096, 0.20273888234527462, 0.17050616206171923, 0.2062870732605244],),)
```

Visualizing these results:

```julia
using Plots
plot(tuned)
```

Predicting on new data using the optimized model:
```julia
predict(tuned, Xnew)
```
```
3-element Array{UnivariateFinite{Multiclass{3},String,UInt32,Float64},1}:
 UnivariateFinite{Multiclass{3}}(versicolor=>0.0, virginica=>0.0, setosa=>1.0)
 UnivariateFinite{Multiclass{3}}(versicolor=>0.0, virginica=>0.0, setosa=>1.0)
 UnivariateFinite{Multiclass{3}}(versicolor=>0.267, virginica=>0.00667, setosa=>0.727)
```

## Constructing a linear pipeline
Reference: Composing Models
Constructing a linear (unbranching) pipeline with a learned target transformation/inverse transformation:
```julia
X, y = @load_reduced_ames
@load KNNRegressor
pipe = @pipeline MyPipe(X -> coerce(X, :age=>Continuous),
                        hot = OneHotEncoder(),
                        knn = KNNRegressor(K=3),
                        target = UnivariateStandardizer())
```
```
MyPipe(
    hot = OneHotEncoder(
        features = Symbol[],
        drop_last = false,
        ordered_factor = true,
        ignore = false),
    knn = KNNRegressor(
        K = 3,
        algorithm = :kdtree,
        metric = Distances.Euclidean(0.0),
        leafsize = 10,
        reorder = true,
        weights = :uniform),
    target = UnivariateStandardizer()) @ 1…83
```

Evaluating the pipeline (just as you would any other model):
```julia
pipe.knn.K = 2
pipe.hot.drop_last = true
evaluate(pipe, X, y, resampling=Holdout(), measure=rms, verbosity=2)
```
```
┌───────────┬───────────────┬────────────┐
│ _.measure │ _.measurement │ _.per_fold │
├───────────┼───────────────┼────────────┤
│ rms       │ 53100.0       │ [53100.0]  │
└───────────┴───────────────┴────────────┘
_.per_observation = [missing]
```
Inspecting the learned parameters in a pipeline:
```julia
mach = machine(pipe, X, y) |> fit!
F = fitted_params(mach)
F.machines
```
```
3-element Array{Any,1}:
 NodalMachine{UnivariateStandardizer} @ 5…67
 NodalMachine{KNNRegressor} @ 1…43
 NodalMachine{OneHotEncoder} @ 4…23
```
```julia
F.fitted_params_given_machine
```
```
OrderedCollections.LittleDict{Any,Any,Array{Any,1},Array{Any,1}} with 3 entries:
  NodalMachine{UnivariateS… => (fitresult = (1.80151e5, 76696.6),)
  NodalMachine{KNNRegresso… => (tree = KDTree{SArray{Tuple{56},Float64,1,56},Eu…
  NodalMachine{OneHotEncod… => (fitresult = OneHotEncoderResult @ 9…21,)
```
```julia
F.fitted_params_given_machine[F.machines[2]]
```
```
(tree = NearestNeighbors.KDTree{StaticArrays.SArray{Tuple{56},Float64,1,56},Distances.Euclidean,Float64}
  Number of points: 1456
  Dimensions: 56
  Metric: Distances.Euclidean(0.0)
  Reordered: true,)
```

Constructing a linear (unbranching) pipeline with a static (unlearned) target transformation/inverse transformation:
```julia
@load DecisionTreeRegressor
pipe2 = @pipeline MyPipe2(X -> coerce(X, :age=>Continuous),
                          hot = OneHotEncoder(),
                          tree = DecisionTreeRegressor(max_depth=4),
                          target = y -> log.(y),
                          inverse = z -> exp.(z))
```
```
MyPipe2(
    hot = OneHotEncoder(
        features = Symbol[],
        drop_last = false,
        ordered_factor = true,
        ignore = false),
    tree = DecisionTreeRegressor(
        max_depth = 4,
        min_samples_leaf = 5,
        min_samples_split = 2,
        min_purity_increase = 0.0,
        n_subfeatures = 0,
        post_prune = false,
        merge_purity_threshold = 1.0),
    target = StaticTransformer(
        f = getfield(Main.ex-workflows, Symbol("##24#25"))()),
    inverse = StaticTransformer(
        f = getfield(Main.ex-workflows, Symbol("##26#27"))())) @ 1…65
```

## Creating a homogeneous ensemble of models
Reference: Homogeneous Ensembles
```julia
X, y = @load_iris
tree_model = @load DecisionTreeClassifier
forest_model = EnsembleModel(atom=tree_model, bagging_fraction=0.8, n=300)
forest = machine(forest_model, X, y)
evaluate!(forest, measure=cross_entropy)
```
```
┌───────────────┬───────────────┬────────────────────────────────────────────────┐
│ _.measure     │ _.measurement │ _.per_fold                                     │
├───────────────┼───────────────┼────────────────────────────────────────────────┤
│ cross_entropy │ 0.63          │ [3.66e-15, 3.66e-15, 0.297, 1.63, 1.54, 0.313] │
└───────────────┴───────────────┴────────────────────────────────────────────────┘
_.per_observation = [[[3.66e-15, 3.66e-15, ..., 3.66e-15], [3.66e-15, 3.66e-15, ..., 3.66e-15], [0.027, 0.0101, ..., 3.66e-15], [3.66e-15, 0.227, ..., 3.66e-15], [3.66e-15, 0.0202, ..., 3.66e-15], [0.0305, 0.452, ..., 0.0408]]]
```
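A probabilistic ensemble predicts by averaging the probabilities its atoms assign to each class (uniformly, when `atomic_weights` is empty). With plain numbers standing in for three hypothetical atoms' probability of one class on a single observation (a sketch of the idea, not the ensemble internals):

```julia
using Statistics

atom_probs = [0.9, 0.7, 0.8]      # P(class) from three hypothetical atoms
ensemble_prob = mean(atom_probs)  # the ensemble's averaged probability, 0.8
```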
## Performance curves

Generate a plot of performance as a function of some hyperparameter (building on the preceding example).

Single performance curve:
```julia
r = range(forest_model, :n, lower=1, upper=1000, scale=:log10)
curve = learning_curve(forest,
                       range=r,
                       resampling=Holdout(),
                       resolution=50,
                       measure=cross_entropy,
                       verbosity=0)
```
```
(parameter_name = "n",
 parameter_scale = :log10,
 parameter_values = [1, 2, 3, 4, 5, 6, 7, 8, 10, 11 … 281, 324, 373, 429, 494, 569, 655, 754, 869, 1000],
 measurements = [9.611640903764574, 9.611640903764574, 8.058527965966839, 8.040507294495367, 8.050424785664886, 8.040507294495367, 8.047358435821021, 8.053293164382112, 7.304753845558118, 7.301664632462653 … 1.3490412510926062, 1.344243831913352, 1.3455844761131976, 1.3490799998329852, 1.3451569086959647, 1.3391561259570337, 1.3350158382670025, 1.3274872540870812, 1.324321272240844, 1.3175488881672346],)
```
```julia
using Plots
plot(curve.parameter_values, curve.measurements,
     xlab=curve.parameter_name, xscale=curve.parameter_scale)
```
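On a `:log10` scale, the `resolution=50` grid of `n` values roughly corresponds to 50 logarithmically spaced points between 1 and 1000, rounded to unique integers (a sketch, not the exact MLJBase iterator):

```julia
lower, upper, resolution = 1, 1000, 50

# logarithmically spaced candidates, rounded to integers, duplicates dropped:
vals = unique(round.(Int, 10 .^ range(log10(lower), log10(upper), length=resolution)))

first(vals), last(vals)   # (1, 1000)
```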
Multiple curves:

```julia
curve = learning_curve(forest,
                       range=r,
                       resampling=Holdout(),
                       measure=cross_entropy,
                       resolution=50,
                       rng_name=:rng,
                       rngs=4,
                       verbosity=0)
```
```
(parameter_name = "n",
 parameter_scale = :log10,
 parameter_values = [1, 2, 3, 4, 5, 6, 7, 8, 10, 11 … 281, 324, 373, 429, 494, 569, 655, 754, 869, 1000],
 measurements = [8.009700753137146 4.004850376568572 15.218431430960575 4.004850376568572; 8.009700753137146 4.004850376568572 15.218431430960575 4.004850376568572; … ; 1.185609260474703 1.2092991704940692 1.2501234167567907 1.2450007539076768; 1.1897175096819221 1.2129079412305683 1.2522134665490254 1.245232940800871],)
```
```julia
plot(curve.parameter_values, curve.measurements,
     xlab=curve.parameter_name, xscale=curve.parameter_scale)
```