Conference paper accepted: Automatic Model Selection for Probabilistic PCA

Principal component analysis
Neural networks
Author

E. López-Rubio, J.M. Ortiz-de-Lazcano-Lobato, Domingo López-Rodríguez, M. del Carmen Vargas-González

Published

1 January 2007

The work "Automatic Model Selection for Probabilistic PCA" has been published in Lecture Notes in Computer Science, vol. 4507, pp. 127-134.

Abstract:

The Mixture of Probabilistic Principal Components Analyzers (MPPCA) is a multivariate analysis technique that defines a Gaussian probabilistic model at each unit. In the original approach, neither the number of units nor the number of principal directions per unit is learned. Variational Bayesian approaches have been proposed for this purpose, but they rely on assumptions about the input distribution and/or approximations of certain statistics. Here we present a different way to solve this problem, in which cross-validation guides the search for an optimal model. This allows the model architecture to be learned without any assumptions beyond those of the basic PPCA framework. Experimental results demonstrate the probability density estimation capabilities of the proposal on high-dimensional data. © Springer-Verlag Berlin Heidelberg 2007.
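To illustrate the idea of cross-validation-guided model selection in the PPCA setting, here is a minimal sketch (not the authors' algorithm, and for a single PPCA model rather than a mixture) using scikit-learn, whose `PCA.score` returns the average log-likelihood of the data under the probabilistic PCA model of Tipping and Bishop. The synthetic data and the helper `select_n_components` are illustrative assumptions:

```python
# Hedged sketch: pick the number of principal directions of a single PPCA
# model by cross-validated log-likelihood. This is an illustration of the
# general cross-validation idea, not the paper's MPPCA procedure.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic data: 3 latent directions embedded in 10 dimensions, plus noise.
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 10))
X = latent @ mixing + 0.1 * rng.normal(size=(500, 10))

def select_n_components(X, max_components):
    """Choose the number of principal directions by 5-fold CV log-likelihood.

    PCA.score gives the mean PPCA log-likelihood, so cross_val_score
    evaluates held-out density fit for each candidate dimensionality.
    """
    scores = []
    for q in range(1, max_components + 1):
        ll = cross_val_score(PCA(n_components=q), X, cv=5).mean()
        scores.append(ll)
    best_q = int(np.argmax(scores)) + 1
    return best_q, scores

best_q, scores = select_n_components(X, 6)
print("selected number of principal directions:", best_q)
```

On this synthetic example the held-out log-likelihood peaks at the true latent dimensionality: fewer components miss signal variance, while more components begin to fit noise, which is the trade-off that cross-validation exposes.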
