Automatic Model Selection for Probabilistic {PCA}

Principal component analysis
Neural networks
Authors

Ezequiel López-Rubio

Juan Miguel Ortiz-De-Lazcano-Lobato

Domingo López-Rodríguez

María del Carmen Vargas-González

Published

1 January 2007

Publication details

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), (4507 LNCS), pp. 127-134

Links

DOI

https://doi.org/10.1007/978-3-540-73007-1_16

Abstract

The Mixture of Probabilistic Principal Components Analyzers (MPPCA) is a multivariate analysis technique which defines a Gaussian probabilistic model at each unit. The number of units, and the number of principal directions in each unit, are not learned in the original approach. Variational Bayesian approaches have been proposed for this purpose, but they rely on assumptions about the input distribution and/or approximations of certain statistics. Here we present a different way to solve this problem, in which cross-validation guides the search for an optimal model. This makes it possible to learn the model architecture without any assumptions beyond those of the basic PPCA framework. Experimental results are presented which show the probability density estimation capabilities of the proposal on high-dimensional data. © Springer-Verlag Berlin Heidelberg 2007.
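The model selection idea in the abstract can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the authors' MPPCA algorithm: it cross-validates the PPCA log-likelihood of a single analyzer to choose the number of principal directions, using scikit-learn's PCA, whose score method returns the average log-likelihood of held-out data under the probabilistic PCA model. The synthetic data and the candidate range are assumptions made for the example.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)

# Synthetic high-dimensional data (illustrative): 5 latent directions
# embedded in 50 observed dimensions, plus isotropic noise.
n_samples, n_features, k_true = 500, 50, 5
X = rng.randn(n_samples, k_true) @ rng.randn(k_true, n_features)
X += 0.1 * rng.randn(n_samples, n_features)

# PCA.score returns the average PPCA log-likelihood, so cross-validating
# it selects the number of principal directions that best explains
# held-out data.
candidates = list(range(1, 11))
cv_loglik = [cross_val_score(PCA(n_components=k), X, cv=5).mean()
             for k in candidates]
best_k = candidates[int(np.argmax(cv_loglik))]
print("Selected number of principal directions:", best_k)

The full method in the paper extends this criterion to a mixture, searching jointly over the number of units and the local dimensionality of each unit; no library implementation of that joint search is assumed here.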

Citation

Please cite this work as:

[Lóp+07] E. López-Rubio, J. Ortiz-De-Lazcano-Lobato, D. López-Rodríguez, et al. “Automatic Model Selection for Probabilistic PCA”. In: Computational and Ambient Intelligence, 9th International Work-Conference on Artificial Neural Networks, IWANN 2007, San Sebastián, Spain, June 20-22, 2007, Proceedings. Ed. by F. S. Hernández, A. Prieto, J. Cabestany and M. Graña. Vol. 4507 LNCS. Lecture Notes in Computer Science. cited By 0; Conference of 9th International Work-Conference on Artificial Neural Networks, IWANN 2007; Conference Date: 20 June 2007 Through 22 June 2007; Conference Code: 71094. San Sebastián: Springer Verlag, 2007, pp. 127-134. DOI: 10.1007/978-3-540-73007-1_16. URL: https://doi.org/10.1007/978-3-540-73007-1_16.

@InProceedings{LopezRubio2007b,
     author = {E. López-Rubio and J.M. Ortiz-De-Lazcano-Lobato and D. López-Rodríguez and M. {Del Carmen Vargas-González}},
     booktitle = {Computational and Ambient Intelligence, 9th International Work-Conference on Artificial Neural Networks, {IWANN} 2007, San Sebastián, Spain, June 20-22, 2007, Proceedings},
     title = {Automatic Model Selection for Probabilistic {PCA}},
     year = {2007},
     address = {San Sebastián},
     editor = {Francisco Sandoval Hernández and Alberto Prieto and Joan Cabestany and Manuel Graña},
     note = {cited By 0; Conference of 9th International Work-Conference on Artificial Neural Networks, IWANN 2007; Conference Date: 20 June 2007 Through 22 June 2007; Conference Code: 71094},
     pages = {127-134},
     publisher = {Springer Verlag},
     series = {Lecture Notes in Computer Science},
     volume = {4507 LNCS},
     abstract = {The Mixture of Probabilistic Principal Components Analyzers (MPPCA) is a multivariate analysis technique which defines a Gaussian probabilistic model at each unit. The number of units, and the number of principal directions in each unit, are not learned in the original approach. Variational Bayesian approaches have been proposed for this purpose, but they rely on assumptions about the input distribution and/or approximations of certain statistics. Here we present a different way to solve this problem, in which cross-validation guides the search for an optimal model. This makes it possible to learn the model architecture without any assumptions beyond those of the basic PPCA framework. Experimental results are presented which show the probability density estimation capabilities of the proposal on high-dimensional data. © Springer-Verlag Berlin Heidelberg 2007.},
     author_keywords = {Cross-validation; Dimensionality reduction; Handwritten digit recognition; Probabilistic Principal Components Analysis (PPCA)},
     bibsource = {dblp computer science bibliography, https://dblp.org},
     biburl = {https://dblp.org/rec/conf/iwann/Lopez-RubioOLV07.bib},
     document_type = {Conference Paper},
     doi = {10.1007/978-3-540-73007-1_16},
     journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
     keywords = {Approximation theory; Bayesian networks; Gaussian distribution; Mathematical models; Probability density function, Multivariate analysis; Optimal model selection, Principal component analysis},
     source = {Scopus},
     url = {https://doi.org/10.1007/978-3-540-73007-1_16},
}