For many backends, training consists of two distinct steps: preparing the training data, then training an ML model. It is possible to reuse the prepared training data using the `--cached` option and just retrain the ML model. But there is no support for the inverse operation: just preparing the data without training the model. This could be useful for DVC workflows (allowing more granular pipeline stages) and in situations where you intend to perform hyperparameter optimization.
The proposal is to add a `--prepare-only` option to the `annif train` command, which skips the model training step in those backends where there is a separate preparation step (i.e. those backends that currently support the `--cached` option).
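For illustration, here is a minimal sketch of how the proposed CLI surface could look, using Click (which Annif's CLI is built on). The `DummyBackend` class and its `prepare_train`/`fit` methods are hypothetical placeholders for this sketch, not Annif's actual internal API; only the `--cached` option and the `annif train` command exist today, and `--prepare-only` is the new flag being proposed.

```python
import click


class DummyBackend:
    """Stand-in backend with a separate data-preparation step (hypothetical)."""

    def prepare_train(self, documents):
        # In a real backend this would vectorize the corpus and write the
        # cached training data to disk.
        click.echo(f"Preparing training data from {documents}")

    def fit(self):
        # In a real backend this would train the ML model on the prepared data.
        click.echo("Training model on prepared data")


@click.command()
@click.argument("project_id")
@click.argument("paths", nargs=-1)
@click.option("--cached", is_flag=True,
              help="Reuse previously prepared training data.")
@click.option("--prepare-only", is_flag=True,
              help="Only prepare the training data; skip model training.")
def train(project_id, paths, cached, prepare_only):
    """Sketch of `annif train` with the proposed --prepare-only flag."""
    backend = DummyBackend()
    if not cached:
        backend.prepare_train(paths)
    if not prepare_only:
        backend.fit()


if __name__ == "__main__":
    train()
```

With this split, a DVC pipeline could have one stage that runs `annif train --prepare-only` and a second stage that runs `annif train --cached`, so hyperparameter runs only repeat the training step.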