Incremental Hyperparameter optimization in GPy classifier #939
-
Hi! Is there any way to do an epoch-wise incremental gradient-descent hyperparameter optimization for the Gaussian Process class GPy.core.gp in the GPy package? I am familiar with the full optimization via model.optimize(), but I am unable to find any clue about incremental learning, as supported by the partial_fit() methods of sklearn estimators. Any clue or help with this is highly appreciated. Thanks in advance!
Replies: 1 comment 1 reply
-
A GPy model memorizes its parameter values, so if you keep the same GPy model instance, model.optimize() will start from the previous parameters. If you do not want to optimize until convergence, you can specify the maximum number of iterations, e.g. model.optimize(max_iters=...).