DecisionTreeRegressor

mutable struct DecisionTreeRegressor <: MLJModelInterface.Deterministic

A simple Decision Tree model for regression with support for missing data (demonstrated at the end of the example below), from the Beta Machine Learning Toolkit (BetaML).

Hyperparameters:

  • max_depth::Int64: The maximum depth the tree is allowed to reach. When this is reached, the node is forced to become a leaf [def: 0, i.e. no limits]
  • min_gain::Float64: The minimum information gain required to allow a node's partition [def: 0]
  • min_records::Int64: The minimum number of records a node must hold to be considered for partitioning [def: 2]
  • max_features::Int64: The maximum number of (random) features to consider at each partitioning [def: 0, i.e. look at all features]
  • splitting_criterion::Function: The function used to compute the information gain of a specific partition, measured as the difference between the "impurity" of the parent node's labels and that of the two child nodes' labels, weighted by the respective number of items [def: variance]. Either variance or a custom function; it can also be an anonymous function (see the sketch after this list).
  • rng::Random.AbstractRNG: A Random Number Generator to be used in stochastic parts of the code [default: Random.GLOBAL_RNG]
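
A custom splitting_criterion is assumed here to take the vector of labels of a candidate node and return a scalar impurity, mirroring the signature of variance. The following is a minimal sketch, not verified output: mad_impurity is a hypothetical user function, not part of BetaML, using mean absolute deviation as the impurity measure; the rng keyword is set for reproducibility.

julia> using MLJ, Statistics, Random

julia> modelType = @load DecisionTreeRegressor pkg = "BetaML" verbosity=0;

julia> mad_impurity(y) = mean(abs.(y .- mean(y)));  # impurity: mean absolute deviation of the labels

julia> model = modelType(splitting_criterion = mad_impurity, rng = MersenneTwister(123));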

Example:

julia> using MLJ

julia> X, y        = @load_boston;

julia> modelType   = @load DecisionTreeRegressor pkg = "BetaML" verbosity=0
BetaML.Trees.DecisionTreeRegressor

julia> model       = modelType()
DecisionTreeRegressor(
  max_depth = 0, 
  min_gain = 0.0, 
  min_records = 2, 
  max_features = 0, 
  splitting_criterion = BetaML.Utils.variance, 
  rng = Random._GLOBAL_RNG())

julia> mach        = machine(model, X, y);

julia> fit!(mach);
[ Info: Training machine(DecisionTreeRegressor(max_depth = 0, …), …).

julia> ŷ           = predict(mach, X);

julia> hcat(y,ŷ)
506×2 Matrix{Float64}:
 24.0  26.35
 21.6  21.6
 34.7  34.8
  ⋮    
 23.9  23.75
 22.0  22.2
 11.9  13.2
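
To illustrate the missing-data support mentioned above, predictors containing missing values can in principle be passed directly, without prior imputation. The toy data below are hypothetical, and MLJ may emit a scitype warning if the interface's declared input scitype does not explicitly cover missing values; this is a minimal sketch continuing the session above, not verified output.

julia> Xm = (x1 = [1.0, 2.0, missing, 4.0, 5.0], x2 = [5.0, 4.0, 3.0, missing, 1.0]);

julia> ym = [1.0, 2.0, 3.0, 4.0, 5.0];

julia> machm = machine(modelType(), Xm, ym);

julia> fit!(machm);

julia> ŷm = predict(machm, Xm);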