TunedModel
```julia
tuned_model = TunedModel(; model=<model to be mutated>,
                           tuning=RandomSearch(),
                           resampling=Holdout(),
                           range=nothing,
                           measure=nothing,
                           n=default_n(tuning, range),
                           operation=nothing,
                           other_options...)
```
Construct a model wrapper for hyper-parameter optimization of a supervised learner, specifying the `tuning` strategy and the `model` whose hyper-parameters are to be mutated.
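For illustration, here is a minimal sketch of this single-model form. The particular model (`RidgeRegressor` from MLJLinearModels), its `lambda` hyper-parameter, and the other option values are assumptions made for the example, not defaults:

```julia
using MLJ

# load a model type to tune (assumes MLJLinearModels is installed)
RidgeRegressor = @load RidgeRegressor pkg=MLJLinearModels

model = RidgeRegressor()

# a one-dimensional range over the `lambda` hyper-parameter
r = range(model, :lambda, lower=1e-4, upper=10.0, scale=:log)

tuned_model = TunedModel(
    model=model,
    tuning=RandomSearch(),
    resampling=CV(nfolds=5),
    range=r,
    measure=rms,
    n=25,
)
```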
```julia
tuned_model = TunedModel(; models=<models to be compared>,
                           resampling=Holdout(),
                           measure=nothing,
                           n=length(models),
                           operation=nothing,
                           other_options...)
```
Construct a wrapper for multiple `models`, for selection of an optimal one (equivalent to specifying `tuning=Explicit()` and `range=models` above). Elements of the iterator `models` need not have a common type, but they must all be `Deterministic` or all be `Probabilistic` (this is not checked, but is inferred from the first element generated).
See below for a complete list of options.
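A minimal sketch of the multi-model form, assuming the DecisionTree and MLJLinearModels model-providing packages are installed (the particular models compared here are illustrative only):

```julia
using MLJ

DecisionTreeRegressor = @load DecisionTreeRegressor pkg=DecisionTree
RidgeRegressor = @load RidgeRegressor pkg=MLJLinearModels

# both candidates are Deterministic, as required
multi_model = TunedModel(
    models=[DecisionTreeRegressor(), RidgeRegressor()],
    resampling=Holdout(fraction_train=0.8),
    measure=mae,
)
```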
Training
Calling `fit!(mach)` on a machine `mach = machine(tuned_model, X, y)` or `mach = machine(tuned_model, X, y, w)` will:

- Instigate a search, over clones of `model`, with the hyper-parameter mutations specified by `range`, for a model optimizing the specified `measure`, using performance evaluations carried out using the specified `tuning` strategy and `resampling` strategy. In the case that `models` is explicitly listed, the search is instead over the models generated by the iterator `models`.

- Fit an internal machine, based on the optimal model `fitted_params(mach).best_model`, wrapping the optimal `model` object in *all* the provided data `X`, `y` (, `w`). Calling `predict(mach, Xnew)` then returns predictions on `Xnew` of this internal machine. The final train can be suppressed by setting `train_best=false`.
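Continuing the ridge-regression sketch above, and assuming MLJ's `make_regression` synthetic data generator for illustration, a typical training workflow is:

```julia
using MLJ

X, y = make_regression(100, 3)    # synthetic data, for illustration only

mach = machine(tuned_model, X, y)
fit!(mach)                        # runs the search, then refits the best model on all data

ŷ = predict(mach, X)              # predictions from the internal (best) machine
fitted_params(mach).best_model    # the optimal model found by the search
```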
Search space
The `range` objects supported depend on the `tuning` strategy specified. Query the strategy's docstring for details. To optimize over an explicit list `v` of models of the same type, use `tuning=Explicit()` and specify `model=v[1]` and `range=v`.
The number of models searched is specified by `n`. If unspecified, then `MLJTuning.default_n(tuning, range)` is used. When `n` is increased and `fit!(mach)` is called again, the old search history is reinstated and the search continues where it left off.
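For example, assuming the sketch above was constructed with `n=25`, the search can be extended like this:

```julia
tuned_model.n = 50   # increase the total number of models to evaluate
fit!(mach)           # resumes the search; the first 25 evaluations are re-used from the history
```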
Measures (metrics)
If more than one `measure` is specified, then only the first is optimized (unless `strategy` is multi-objective), but the performance against every measure specified will be computed and reported in `report(mach).best_performance` and other relevant attributes of the generated report. Options exist to pass per-observation weights or class weights to measures; see below.
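For example, in the following sketch (continuing the ridge-regression example, with illustrative measures) only `rms` drives the optimization, but `mae` is also computed and recorded in the history:

```julia
tuned_model = TunedModel(
    model=model,
    tuning=RandomSearch(),
    resampling=CV(nfolds=5),
    range=r,
    measure=[rms, mae],   # first measure is optimized; all are reported
)
```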
*Important.* If a custom measure `my_measure` is used, and the measure is a score, rather than a loss, be sure to check that `MLJ.orientation(my_measure) == :score` to ensure maximization of the measure, rather than minimization. Override an incorrect value with `MLJ.orientation(::typeof(my_measure)) = :score`.
Accessing the fitted parameters and other training (tuning) outcomes
A Plots.jl plot of performance estimates is returned by `plot(mach)` or `heatmap(mach)`.
Once a tuning machine `mach` has been trained as above, `fitted_params(mach)` has these keys/values:
key | value |
---|---|
`best_model` | optimal model instance |
`best_fitted_params` | learned parameters of the optimal model |
The named tuple `report(mach)` includes these keys/values:
key | value |
---|---|
`best_model` | optimal model instance |
`best_history_entry` | corresponding entry in the history, including performance estimate |
`best_report` | report generated by fitting the optimal model to all data |
`history` | tuning strategy-specific history of all evaluations |
plus other key/value pairs specific to the `tuning` strategy.
Each element of `history` is a property-accessible object with these properties:
key | value |
---|---|
`measure` | vector of measures (metrics) |
`measurement` | vector of measurements, one per measure |
`per_fold` | vector of vectors of unaggregated per-fold measurements |
`evaluation` | full `PerformanceEvaluation`/`CompactPerformanceEvaluation` object |
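A brief sketch of inspecting these outcomes, assuming a tuning machine `mach` trained as above (the accessor names follow the tables just given):

```julia
fitted_params(mach).best_model          # optimal model instance
fitted_params(mach).best_fitted_params  # learned parameters of the optimal model

rep = report(mach)
rep.best_history_entry.measurement      # measurement(s) achieved by the best model
length(rep.history)                     # number of models evaluated so far
rep.history[1].evaluation               # full evaluation object for the first model tried
```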
Complete list of key-word options
- `model`: `Supervised` model prototype that is cloned and mutated to generate models for evaluation
- `models`: Alternatively, an iterator of MLJ models to be explicitly evaluated. These may have varying types.
- `tuning=RandomSearch()`: tuning strategy to be applied (eg, `Grid()`). See the Tuning Models section of the MLJ manual for a complete list of options.
- `resampling=Holdout()`: resampling strategy (eg, `Holdout()`, `CV()`, `StratifiedCV()`) to be applied in performance evaluations
- `measure`: measure or measures to be applied in performance evaluations; only the first is used in optimization (unless the strategy is multi-objective) but all are reported to the history
- `weights`: per-observation weights to be passed to the measure(s) in performance evaluations, where supported. Check support with `supports_weights(measure)`.
- `class_weights`: class weights to be passed to the measure(s) in performance evaluations, where supported. Check support with `supports_class_weights(measure)`.
- `repeats=1`: for generating train/test sets multiple times in resampling ("Monte Carlo" resampling); see `evaluate!` for details
- `operation`/`operations`: one of `predict`, `predict_mean`, `predict_mode`, `predict_median`, or `predict_joint`, or a vector of these of the same length as `measure`/`measures`. Automatically inferred if left unspecified.
- `range`: range object; tuning strategy documentation describes supported types
- `selection_heuristic`: the rule determining how the best model is decided. According to the default heuristic, `NaiveSelection()`, `measure` (or the first element of `measure`) is evaluated for each resample and these per-fold measurements are aggregated. The model with the lowest (resp. highest) aggregate is chosen if the measure is a `:loss` (resp. a `:score`).
- `n`: number of iterations (ie, models to be evaluated); set by the tuning strategy if left unspecified
- `train_best=true`: whether to train the optimal model
- `acceleration=default_resource()`: mode of parallelization for tuning strategies that support this
- `acceleration_resampling=CPU1()`: mode of parallelization for resampling
- `check_measure=true`: whether to check that `measure` is compatible with the specified `model` and `operation`
- `cache=true`: whether to cache model-specific representations of user-supplied data; set to `false` to conserve memory. Speed gains are likely limited to the case `resampling isa Holdout`.
- `compact_history=true`: whether to write `CompactPerformanceEvaluation` or regular `PerformanceEvaluation` objects to the history (accessed via the `:evaluation` key); the compact form excludes some fields to conserve memory.
- `logger=default_logger()`: a logger for externally reporting model performance evaluations, such as an `MLJFlow.Logger` instance. On startup, `default_logger()` is `nothing`; use `default_logger(logger)` to set a global logger.