Session B5 - Random Matrices - Semi-plenary talk
July 14, 15:30 ~ 16:20
Variations on PCA
Princeton University, United States - email@example.com
Principal component analysis, or PCA, is a data analysis method used in nearly every scientific discipline. It allows the scientist to discover the main directions of variability in a dataset, known as the principal components. The principal components serve as factors for modeling the data, for denoising and compression, and for visualization, clustering, or any other further analysis of the data.
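The classical procedure described above can be sketched in a few lines; this is a minimal illustration of standard PCA via the singular value decomposition (not the speaker's new methods), with function names chosen here for illustration.

```python
import numpy as np

def pca(X, k):
    """Top-k principal components of X (n_samples x n_features).

    A minimal sketch of classical PCA: center the data, take the SVD,
    and keep the leading k right singular vectors as the principal
    directions. Not the variants discussed in the talk.
    """
    Xc = X - X.mean(axis=0)                        # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                            # principal directions
    scores = U[:, :k] * S[:k]                      # coordinates of the data
    return components, scores

def reconstruct(X, k):
    """Rank-k reconstruction, the basis of PCA denoising/compression."""
    components, scores = pca(X, k)
    return scores @ components + X.mean(axis=0)
```

For low-rank data, the rank-k reconstruction recovers the data exactly; for noisy data, truncating to a small k is what gives PCA its denoising and compression power.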
Despite enormous progress in recent years on PCA in high dimensions, the theory and methods developed so far are mostly limited to simple applications involving the sample covariance matrix. Current methods for PCA are either statistically suboptimal or not computationally scalable for data corrupted by non-additive, non-Gaussian random effects, as routinely encountered in problems involving missing data, image deconvolution, and Poissonian noise.
In this talk we will discuss two variants of PCA for which we recently developed new statistical and computational frameworks: 1) PCA for exponential family distributions, and 2) PCA from noisy linearly reduced measurements.
We will discuss applications to cryo-electron microscopy, X-ray free electron lasers, low-rank matrix completion, and noisy deconvolution that motivated this work.
Joint work with Lydia Liu (Princeton University), William Leeb (Princeton University), Edgar Dobriban (Stanford University), Joakim Andén (Princeton University), Tejal Bhamre (Princeton University), and Teng Zhang (University of Central Florida).