Probabilistic Non-Linear Principal Component Analysis with Gaussian Process Latent Variables

Posted in Science on August 22, 2008


It is known that principal component analysis (PCA) has an underlying probabilistic representation based on a latent variable model: PCA is recovered when the latent variables are integrated out and the parameters of the model are optimised by maximum likelihood. It is less well known that the dual approach, integrating out the parameters and optimising with respect to the latent variables, also leads to PCA. The marginalised likelihood in this case takes the form of Gaussian process mappings, with linear covariance functions, from a latent space to an observed space, which we refer to as a Gaussian Process Latent Variable Model (GPLVM). This dual probabilistic PCA is still a linear latent variable model, but by looking beyond the inner product kernel as a covariance function we can develop a non-linear probabilistic PCA.
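
For concreteness, the sketch below illustrates the GPLVM idea from the abstract in a toy Python/NumPy example (my own illustration, not Lawrence's implementation): the latent positions X are optimised to maximise the Gaussian process marginal likelihood of centred data Y under a non-linear (RBF) covariance. The kernel parameters, noise level, toy data, and random initialisation are all assumptions made here for brevity; a real implementation would also optimise the hyperparameters and initialise X with PCA.

```python
# Minimal GPLVM sketch: optimise latent X to maximise the GP marginal
# likelihood p(Y | X) with an RBF covariance and fixed hyperparameters.
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    # Squared-exponential covariance between latent points.
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def neg_log_marginal(x_flat, Y, Q, noise=0.01):
    # Negative log p(Y | X): independent GP mappings for each output dimension
    # share the same covariance K built from the latent positions X.
    N, D = Y.shape
    X = x_flat.reshape(N, Q)
    K = rbf_kernel(X) + noise * np.eye(N)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y))   # K^{-1} Y
    logdet = 2 * np.sum(np.log(np.diag(L)))
    return 0.5 * D * logdet + 0.5 * np.sum(Y * alpha)

# Toy data: 30 points in 5 dimensions, reduced to a 2-D latent space.
rng = np.random.default_rng(0)
Y = rng.standard_normal((30, 5))
Y -= Y.mean(0)
Q = 2
X0 = 0.1 * rng.standard_normal((30, Q))  # PCA would be a better initialisation
res = minimize(neg_log_marginal, X0.ravel(), args=(Y, Q), method="L-BFGS-B")
X_latent = res.x.reshape(30, Q)
print(X_latent[:5])
```

With a linear covariance in place of the RBF one, the same optimisation recovers (dual probabilistic) PCA; swapping in the non-linear kernel is what turns the model into a non-linear probabilistic PCA.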

Author: Neil Lawrence, University Of Manchester

Watch Video

Tags: Science, Lectures, Computer Science, Machine Learning, VideoLectures.Net, Gaussian Processes