minimize
minimize(model) -> <smaller version of model suitable for serialization>
Typical workflow
model = fit(algorithm, X, y)
ŷ = predict(model, LiteralTarget(), Xnew)
LearnAPI.feature_importances(model)
small_model = minimize(model)
serialize("my_random_forest.jls", small_model)
recovered_model = deserialize("my_random_forest.jls")
@assert predict(recovered_model, LiteralTarget(), Xnew) == ŷ
# throws MethodError:
LearnAPI.feature_importances(recovered_model)
Implementation guide
| method | compulsory? | fallback | requires |
|---|---|---|---|
| `minimize` | no | identity | `fit` |
Reference
LearnAPI.minimize — Function

minimize(model; options...)
Return a version of model that will generally have a smaller memory allocation than model, suitable for serialization. Here model is any object returned by fit. Accessor functions that can be called on model may not work on minimize(model), but predict, transform and inverse_transform will work, if implemented for model. Check LearnAPI.functions(LearnAPI.algorithm(model)) to see what the original model implements.
Specific algorithms may provide keyword options to control how much of the original functionality is preserved by minimize.
Extended help
New implementations
Overloading minimize for new algorithms is optional. The fallback is the identity. If overloaded, you must include minimize in the tuple returned by the LearnAPI.functions trait.
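To illustrate the pattern, here is a minimal sketch of a minimize overload for a hypothetical fitted model type. `ForestModel` and its fields are invented for this example; a real implementation would overload `LearnAPI.minimize` for its own model type rather than define a standalone function.

```julia
# Hypothetical fitted-model type: `trees` is needed by prediction,
# while `importances` is accessor-only state that can be dropped.
struct ForestModel
    trees::Vector{Any}
    importances::Vector{Float64}
end

# Keep everything prediction needs; discard accessor-only data.
# (In a real package this would be a method of `LearnAPI.minimize`.)
minimize(model::ForestModel) = ForestModel(model.trees, Float64[])

model = ForestModel(Any[:tree1, :tree2], [0.7, 0.3])
small = minimize(model)
@assert small.trees == model.trees        # prediction state preserved
@assert isempty(small.importances)        # accessor state dropped
```

Note that the prediction state is shared, not copied, so minimizing costs little; only the droppable fields are replaced.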
New implementations must enforce the following identities, whenever the right-hand side is defined:
predict(minimize(model; options...), args...; kwargs...) ==
predict(model, args...; kwargs...)
transform(minimize(model; options...), args...; kwargs...) ==
transform(model, args...; kwargs...)
inverse_transform(minimize(model; options...), args...; kwargs...) ==
inverse_transform(model, args...; kwargs...)
Additionally:
minimize(minimize(model)) == minimize(model)
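A sketch of checking these identities on a toy model, where prediction depends only on the state that minimize keeps. The `Toy` type, its `predict` signature, and the standalone `minimize` are all invented for illustration and do not match the LearnAPI method signatures:

```julia
# Toy fitted model: `coef` drives prediction; `training_log` is droppable.
struct Toy
    coef::Float64
    training_log::Vector{String}
end

minimize(model::Toy) = Toy(model.coef, String[])
predict(model::Toy, X) = model.coef .* X

model = Toy(2.0, ["epoch 1", "epoch 2"])
X = [1.0, 2.0, 3.0]

# The predict identity: minimizing must not change predictions.
@assert predict(minimize(model), X) == predict(model, X)

# Idempotence: minimizing twice is the same as minimizing once.
@assert minimize(minimize(model)).coef == minimize(model).coef
@assert minimize(minimize(model)).training_log == minimize(model).training_log
```

Running checks like these against every droppable field is a cheap way for an implementation to confirm it has not discarded state that predict, transform or inverse_transform actually need.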