Automatic Model Selection for Probabilistic {PCA}
Abstract
The Mixture of Probabilistic Principal Components Analyzers (MPPCA) is a multivariate analysis technique which defines a Gaussian probabilistic model at each unit. In the original approach, neither the number of units nor the number of principal directions in each unit is learned from the data. Variational Bayesian approaches have been proposed for this purpose, but they rely on assumptions about the input distribution and/or approximations of certain statistics. Here we present a different way to solve this problem, in which cross-validation guides the search for the optimal model. This allows the model architecture to be learned without any assumptions beyond those of the basic PPCA framework. Experimental results are presented which show the probability density estimation capabilities of the proposal on high-dimensional data. © Springer-Verlag Berlin Heidelberg 2007.
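As an informal illustration of the cross-validation idea summarized above, the sketch below selects the number of principal directions of a single PPCA model by held-out log-likelihood. It is only a rough approximation of the paper's proposal (which also learns the number of mixture units): the use of scikit-learn, the synthetic data, and the candidate range 1-10 are assumptions of the example, not details taken from the paper. scikit-learn's PCA.score() returns the average log-likelihood of samples under Tipping and Bishop's probabilistic PCA model, so the candidate with the highest cross-validated score is kept.

    # Minimal sketch (not the authors' exact algorithm): choose the number of
    # principal directions for a PPCA model by cross-validated log-likelihood.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # Synthetic high-dimensional data with a 3-dimensional latent structure
    # (chosen only to keep the example self-contained).
    latent = rng.normal(size=(500, 3))
    mixing = rng.normal(size=(3, 50))
    X = latent @ mixing + 0.1 * rng.normal(size=(500, 50))

    best_q, best_ll = None, -np.inf
    for q in range(1, 11):
        # 5-fold cross-validated log-likelihood of held-out samples under PPCA(q);
        # with no explicit scorer, cross_val_score uses PCA.score().
        ll = cross_val_score(PCA(n_components=q), X, cv=5).mean()
        if ll > best_ll:
            best_q, best_ll = q, ll

    print(f"Selected number of principal directions: {best_q}")

The same held-out likelihood criterion could, in principle, also be looped over the number of mixture units, which is closer in spirit to the architecture search described in the paper.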
Cites
The following graph plots the number of citations received by this work since its publication, on a yearly basis.
Citation
Please cite this work as:
[Lóp+07] E. López-Rubio, J. Ortiz-De-Lazcano-Lobato, D. López-Rodríguez, et al. “Automatic Model Selection for Probabilistic PCA”. In: Computational and Ambient Intelligence, 9th International Work-Conference on Artificial Neural Networks, IWANN 2007, San Sebastián, Spain, June 20-22, 2007, Proceedings. Ed. by F. S. Hernández, A. Prieto, J. Cabestany and M. Graña. Vol. 4507. Lecture Notes in Computer Science. Springer-Verlag, 2007, pp. 127-134. DOI: 10.1007/978-3-540-73007-1_16. URL: https://doi.org/10.1007/978-3-540-73007-1_16.