- refactor: Only pass `extra` to `$assign_result()`.
- feat: Add new callback `clbk("mlr3tuning.one_se_rule")` that selects the hyperparameter configuration with the smallest feature set within one standard error of the best.
- feat: Add new stages `on_tuning_result_begin` and `on_result_begin` to `CallbackAsyncTuning` and `CallbackBatchTuning`.
- refactor: Rename stage `on_result` to `on_result_end` in `CallbackAsyncTuning` and `CallbackBatchTuning`.
- docs: Extend the `CallbackAsyncTuning` and `CallbackBatchTuning` documentation.
- compatibility: mlr3 0.22.0
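As a hedged sketch of the `clbk("mlr3tuning.one_se_rule")` callback mentioned above (task, learner, and budget are illustrative choices, not part of this changelog), the callback is attached via the `callbacks` argument of `tune()`:

```r
library(mlr3)
library(mlr3tuning)

# Illustrative sketch: tune a decision tree and let the one-SE-rule
# callback pick the configuration with the smallest feature set within
# one standard error of the best score.
instance = tune(
  tuner = tnr("grid_search", resolution = 10),
  task = tsk("sonar"),
  learner = lrn("classif.rpart", cp = to_tune(1e-4, 1e-1, logscale = TRUE)),
  resampling = rsmp("cv", folds = 3),
  measures = msr("classif.ce"),
  callbacks = clbk("mlr3tuning.one_se_rule")
)
instance$result  # result selected by the one-standard-error rule
```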
- fix: The `as_data_table()` functions do not unnest the `x_domain` column anymore by default.
- fix: `to_tune(internal = TRUE)` now also works if non-internal tuning parameters require an `.extra_trafo`.
- feat: It is now possible to pass an `internal_search_space` manually. This makes it possible to use parameter transformations on the primary search space in combination with internal hyperparameter tuning.
- refactor: The `Tuner` now passes extra information about the result via the `extra` parameter.
- refactor: Extract internal tuned values in instance.
- refactor: Replace internal tuning callback.
- perf: Delete intermediate `BenchmarkResult` in `ObjectiveTuningBatch` after optimization.
- feat: Introduce asynchronous optimization with the `TunerAsync` and `TuningInstanceAsync*` classes.
- BREAKING CHANGE: The `Tuner` class is `TunerBatch` now.
- BREAKING CHANGE: The `TuningInstanceSingleCrit` and `TuningInstanceMultiCrit` classes are `TuningInstanceBatchSingleCrit` and `TuningInstanceBatchMultiCrit` now.
- BREAKING CHANGE: The `CallbackTuning` class is `CallbackBatchTuning` now.
- BREAKING CHANGE: The `ContextEval` class is `ContextBatchTuning` now.
- refactor: Remove hotstarting from batch optimization due to low performance.
- refactor: The option `evaluate_default` is a callback now.
- compatibility: Work with new paradox version 1.0.0
- fix: `TunerIrace` failed with logical parameters and dependencies.
- Added marshaling support to `AutoTuner`.
- refactor: Change thread limits.
- refactor: Speed up the tuning process by minimizing the number of deep clones and parameter checks.
- fix: Set `store_benchmark_result = TRUE` if `store_models = TRUE` when creating a tuning instance.
- fix: Passing a terminator in `tune_nested()` did not work.
- fix: Add `$phash()` method to `AutoTuner`.
- fix: Include `Tuner` in hash of `AutoTuner`.
. - feat: Add new callback that scores the configurations on additional measures while tuning.
- feat: Add vignette about adding new tuners which was previously part of the mlr3book.
- BREAKING CHANGE: The `method` parameter of `tune()`, `tune_nested()` and `auto_tuner()` is renamed to `tuner`. Only `Tuner` objects are accepted now. Arguments to the tuner cannot be passed with `...` anymore.
- BREAKING CHANGE: The `tuner` parameter of `AutoTuner` is moved to the first position to achieve consistency with the other functions.
- docs: Update resources sections.
- docs: Add list of default measures.
- fix: Add `allow_hotstarting`, `keep_hotstart_stack` and `keep_models` flags to `AutoTuner` and `auto_tuner()`.
- feat: `AutoTuner` accepts instantiated resamplings now. The `AutoTuner` checks if all row ids of the inner resampling are present in the outer resampling train set when nested resampling is performed.
- fix: Standalone `Tuner` did not create a `ContextOptimization`.
- fix: The `ti()` function did not accept callbacks.
- feat: The methods `$importance()`, `$selected_features()`, `$oob_error()` and `$loglik()` are forwarded from the final model to the `AutoTuner` now.
- refactor: The `AutoTuner` stores the instance and benchmark result if `store_models = TRUE`.
- refactor: The `AutoTuner` stores the instance if `store_benchmark_result = TRUE`.
- feat: Add new callback that enables early stopping while tuning to `mlr_callbacks`.
- feat: Add new callback that backs up the benchmark result to disk after each batch.
- feat: Create custom callbacks with the `callback_batch_tuning()` function.
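A minimal sketch of a custom callback built with `callback_batch_tuning()`; the id, stage, and message below are made up for illustration, and the available stages should be checked in the `callback_batch_tuning()` documentation:

```r
library(mlr3tuning)

# Hypothetical example: a callback that reports the number of evaluations
# once optimization has finished. The id "mycb.report" is an assumption.
my_callback = callback_batch_tuning("mycb.report",
  on_optimization_end = function(callback, context) {
    cat("Tuning finished after", context$instance$archive$n_evals, "evaluations\n")
  }
)
```

The callback object can then be passed to a tuning function via its `callbacks` argument.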
- fix: `AutoTuner` did not accept `TuningSpace` objects as search spaces.
- feat: Add `ti()` function to create a `TuningInstanceSingleCrit` or `TuningInstanceMultiCrit`.
- docs: Documentation has a technical details section now.
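A hedged sketch of the `ti()` constructor (task, learner, resampling, and budget are illustrative assumptions):

```r
library(mlr3)
library(mlr3tuning)

# Illustrative: build a single-criterion tuning instance with ti().
instance = ti(
  task = tsk("iris"),
  learner = lrn("classif.rpart", cp = to_tune(1e-4, 1e-1, logscale = TRUE)),
  resampling = rsmp("cv", folds = 3),
  measures = msr("classif.ce"),
  terminator = trm("evals", n_evals = 20)
)

# The instance is then passed to a tuner for optimization.
tuner = tnr("random_search")
tuner$optimize(instance)
```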
- feat: New option for `extract_inner_tuning_results()` to return the tuning instances.
- feat: Add option `evaluate_default` to evaluate learners with hyperparameters set to their default values.
- refactor: From now on, the default of `smooth` is `FALSE` for `TunerGenSA`.
- feat: `Tuner` objects have the field `$id` now.
- feat: Allow passing `Tuner` objects as `method` in `tune()` and `auto_tuner()`.
- docs: Link `Tuner` to the help page of `bbotk::Optimizer`.
- feat: `Tuner` objects have the optional field `$label` now.
- feat: `as.data.table()` functions for objects of class `Dictionary` have been extended with additional columns.
- feat: Add an `as.data.table.DictionaryTuner` function.
- feat: New `$help()` method which opens the manual page of a `Tuner`.
- feat: `as_search_space()` function to create search spaces from `Learner` and `ParamSet` objects. Allow passing `TuningSpace` objects as `search_space` in `TuningInstanceSingleCrit` and `TuningInstanceMultiCrit`.
- feat: The `mlr3::HotstartStack` can now be removed after tuning with the `keep_hotstart_stack` flag.
- feat: The `Archive` stores errors and warnings of the learners.
- feat: When no measure is provided, the default measure is used in `auto_tuner()` and `tune_nested()`.
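As a sketch of deriving a search space with `as_search_space()` (the learner and parameter ranges are illustrative assumptions), parameters marked with `to_tune()` on a learner can be converted directly:

```r
library(mlr3)
library(mlr3tuning)

# Illustrative: tag parameters for tuning on the learner itself ...
learner = lrn("classif.rpart",
  cp = to_tune(1e-4, 1e-1, logscale = TRUE),
  minsplit = to_tune(2, 64)
)

# ... then derive the search space from the tagged learner.
search_space = as_search_space(learner)
print(search_space)
```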
- fix: `$assign_result()` method in `TuningInstanceSingleCrit` when search space is empty.
- feat: Default measure is used when no measure is supplied to `TuningInstanceSingleCrit`.
- Fixes bug in `TuningInstanceMultiCrit$assign_result()`.
- Hotstarting of learners with previously fitted models.
- Remove deep clones to speed up tuning.
- Add `store_models` flag to `auto_tuner()`.
- Add `"noisy"` property to `ObjectiveTuning`.
- Adds `AutoTuner$base_learner()` method to extract the base learner from nested learner objects.
- `tune()` supports multi-criteria tuning.
- Allows empty search space.
- Adds `TunerIrace` from the `irace` package.
- `extract_inner_tuning_archives()` helper function to extract inner tuning archives.
- Removes `ArchiveTuning$extended_archive()` method. The `mlr3::ResampleResults` are joined automatically by `as.data.table.TuningArchive()` and `extract_inner_tuning_archives()`.
- Adds `tune()`, `auto_tuner()` and `tune_nested()` sugar functions.
- `TuningInstanceSingleCrit`, `TuningInstanceMultiCrit` and `AutoTuner` can be initialized with `store_benchmark_result = FALSE` and `store_models = TRUE` to allow measures to access the models.
- Prettier printing methods.
- Fix `TuningInstance*$assign_result()` errors with required parameter bug.
- Shortcuts to access `$learner()`, `$learners()`, `$learner_param_vals()`, `$predictions()` and `$resample_result()` from the benchmark result in the archive.
- `extract_inner_tuning_results()` helper function to extract inner tuning results.
- `ArchiveTuning$data` is a public field now.
- Adds `TunerCmaes` from the `adagio` package.
- Fix `predict_type` in `AutoTuner`.
- Support to set `TuneToken` in `Learner$param_set` and create a search space from it.
- The order of the parameters in `TuningInstanceSingleCrit` and `TuningInstanceMultiCrit` changed.
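A hedged sketch of setting a `TuneToken` directly in the learner's parameter set (the learner and ranges are illustrative); the search space is then built from the tokens:

```r
library(mlr3)
library(mlr3tuning)

learner = lrn("classif.rpart")

# Assign TuneTokens directly in the parameter set.
learner$param_set$values$cp = to_tune(1e-4, 1e-1)
learner$param_set$values$minsplit = to_tune(2, 32)

# paradox constructs a search space from the tokens.
search_space = learner$param_set$search_space()
print(search_space)
```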
- Option to control `store_benchmark_result`, `store_models` and `check_values` in `AutoTuner`. `store_tuning_instance` must be set as a parameter during initialization.
- Fixes `check_values` flag in `TuningInstanceSingleCrit` and `TuningInstanceMultiCrit`.
- Removed dependency on orphaned package `bibtex`.
- Compact in-memory representation of R6 objects to save space when saving mlr3 objects via `saveRDS()`, `serialize()` etc.
- `Archive` is `ArchiveTuning` now, which stores the benchmark result in `$benchmark_result`. This change removed the resample results from the archive, but they can still be accessed via the benchmark result.
- Warning message if external package for tuning is not installed.
- To retrieve the inner tuning results in nested resampling, `as.data.table(rr)$learner[[1]]$tuning_result` must be used now.
- `TuningInstance` is now `TuningInstanceSingleCrit`. `TuningInstanceMultiCrit` is still available for multi-criteria tuning.
- Terminators are now accessible by `trm()` and `trms()` instead of `term()` and `terms()`.
- Storing of resample results is optional now by using the `store_resample_result` flag in `TuningInstanceSingleCrit` and `TuningInstanceMultiCrit`.
- `TunerNLoptr` adds non-linear optimization from the `nloptr` package.
- Logging is controlled by the `bbotk` logger now.
- Proposed points and performance values can be checked for validity by activating the `check_values` flag in `TuningInstanceSingleCrit` and `TuningInstanceMultiCrit`.
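The `trm()`/`trms()` accessors mentioned above work like the other mlr3 sugar functions; a brief sketch (the terminator choices are illustrative):

```r
library(mlr3tuning)

# Single terminator: stop after 100 evaluations.
terminator = trm("evals", n_evals = 100)

# Retrieve several terminators at once with trms().
terminators = trms(c("evals", "run_time"))
```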
- mlr3tuning now depends on the `bbotk` package for basic tuning objects. `Terminator` classes now live in `bbotk`. As a consequence, `ObjectiveTuning` inherits from `bbotk::Objective`, `TuningInstance` from `bbotk::OptimInstance` and `Tuner` from `bbotk::Optimizer`.
- `TuningInstance$param_set` becomes `TuningInstance$search_space` to avoid confusion, as the `param_set` usually contains the parameters that change the behavior of an object.
- Tuning is triggered by `$optimize()` instead of `$tune()`.
- Fixed a bug in `AutoTuner` where a `$clone()` was missing. Tuning results are unaffected; only stored models contained wrong hyperparameter values (#223).
- Improved output log (#218).
- Maintenance release.
- Initial prototype.