Active GP Hyperparameter Learning

This is a MATLAB implementation of the method for actively learning GP hyperparameters described in

Garnett, R., Osborne, M., and Hennig, P. Active Learning of Linear Embeddings for Gaussian Processes. In Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence (UAI 2014), 2014.

Given a GP model on a function f:

p(f | \theta) = GP(f; \mu(x; \theta), K(x, x'; \theta))

this routine sequentially selects observation locations X = {x_i} with the goal of learning the GP hyperparameters \theta as quickly as possible. It does so by maintaining a probabilistic belief p(\theta | D) and choosing each observation location to maximize the Bayesian active learning by disagreement (BALD) criterion described in

Houlsby, N., Huszár, F., Ghahramani, Z., and Lengyel, M. Bayesian Active Learning for Classification and Preference Learning. arXiv preprint arXiv:1112.5745 [stat.ML], 2011.
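In the notation above, BALD selects the location whose observation carries the greatest mutual information about the hyperparameters (stated here for orientation; see the papers above for the precise form and derivation):

x* = argmax_x H(y | x, D) - E_{\theta ~ p(\theta | D)} H(y | x, \theta, D)

That is, it favors points where the hyperparameter-marginal prediction is uncertain but each individual hyperparameter setting would make a confident prediction, so the candidate predictions maximally disagree.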

This implementation uses the approximation to BALD described in the Garnett et al. paper above, which relies on the "marginal GP" (MGP) method for approximate GP hyperparameter marginalization.

The main entry point is learn_gp_hyperparameters.m. See demo/demo.m for a simple usage example.
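The sketch below illustrates the intended workflow; the struct field names and the exact signature of learn_gp_hyperparameters are assumptions made for illustration, so treat demo/demo.m as the authoritative reference:

% illustrative sketch only -- the problem/model field names and the
% learn_gp_hyperparameters signature are assumed, not taken from the
% repository; see demo/demo.m for the real interface

% standard GPML model specification
model.mean_function       = {@meanConst};
model.covariance_function = {@covSEiso};
model.likelihood          = @likGauss;
% a hyperparameter prior p(\theta) is also required (supplied via
% gpml_extensions); demo/demo.m shows how it is attached to the model

% problem description: observation budget, candidate locations, and a
% handle to the (noisy) function to be queried
problem.num_evaluations  = 20;                          % observation budget
problem.candidate_x_star = linspace(0, 1, 500)';        % candidate locations
problem.f                = @(x) sin(10 * x) + 0.1 * randn(size(x));

% sequentially select observations to learn the GP hyperparameters
results = learn_gp_hyperparameters(problem, model);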

Dependencies

This code is written to be interoperable with the GPML MATLAB toolbox, available here:

http://www.gaussianprocess.org/gpml/code/matlab/doc/

The GPML toolbox must be in your MATLAB path for this code to work. The code also depends on the gpml_extensions repository, available here:

https://github.com/rmgarnett/gpml_extensions/

as well as the marginal GP (MGP) implementation available here:

https://github.com/rmgarnett/mgp/

Both must be in your MATLAB path. Finally, the optimization of the GP log posterior requires Mark Schmidt's minFunc function:

http://www.di.ens.fr/~mschmidt/Software/minFunc.html
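For example, one way to put everything on the path (the directory names below are placeholders for wherever you have unpacked each dependency):

% illustrative path setup; replace the directories with your own checkouts
addpath(genpath('/path/to/gpml'));              % GPML toolbox
addpath(genpath('/path/to/gpml_extensions'));   % gpml_extensions
addpath(genpath('/path/to/mgp'));               % marginal GP (MGP)
addpath(genpath('/path/to/minFunc'));           % minFunc
addpath('/path/to/active_gp_hyperlearning');    % this repository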
