Generative Principal Component Analysis

Authors: Zhaoqiang Liu, Jiulong Liu, Subhroshekhar Ghosh, Jun Han, Jonathan Scarlett

ICLR 2022

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform experiments on various image datasets for spiked matrix and phase retrieval models, and illustrate performance gains of our method compared to the classic power method and the truncated power method devised for sparse principal component analysis. |
| Researcher Affiliation | Collaboration | Zhaoqiang Liu, National University of Singapore (dcslizha@nus.edu.sg); Jiulong Liu, Chinese Academy of Sciences (jiulongliu@lsec.cc.ac.cn); Subhroshekhar Ghosh, National University of Singapore (subhrowork@gmail.com); Jun Han, PCG, Tencent (junhanjh@tencent.com); Jonathan Scarlett, National University of Singapore (scarlett@comp.nus.edu.sg) |
| Pseudocode | Yes | Algorithm 1: A projected power method for GPCA (PPower). Input: V, number of iterations T, pre-trained generative model G, initial vector w^(0). Procedure: iterate w^(t+1) = P_G(V w^(t)) for t ∈ {0, 1, ..., T − 1}, and return w^(T). (A code sketch of this iteration is given after the table.) |
| Open Source Code | Yes | All experiments are run using Python 3.6 and TensorFlow 1.5.0, with a NVIDIA GeForce GTX 1080 Ti 11GB GPU. The corresponding code is available at https://github.com/liuzq09/Generative_PCA. |
| Open Datasets | Yes | The experiments are performed on the MNIST (LeCun et al., 1998), Fashion-MNIST (Xiao et al., 2017) and CelebA (Liu et al., 2015) datasets, with the numerical results for the Fashion-MNIST and CelebA datasets being presented in Appendices H and I. |
| Dataset Splits | No | The paper mentions a "test set" for evaluation and uses pre-trained generative models, but does not explicitly provide the training/validation/test dataset splits for its own experiments (e.g., percentages or sample counts for each split). |
| Hardware Specification | Yes | All experiments are run using Python 3.6 and TensorFlow 1.5.0, with a NVIDIA GeForce GTX 1080 Ti 11GB GPU. |
| Software Dependencies | Yes | All experiments are run using Python 3.6 and TensorFlow 1.5.0, with a NVIDIA GeForce GTX 1080 Ti 11GB GPU. |
| Experiment Setup | Yes | For all three algorithms, the total number of iterations T is set to be 30. ... The VAE is trained by the Adam optimizer with a minibatch size of 100 and a learning rate of 0.001. The projection step P_G(·) is solved by the Adam optimizer with a learning rate of 0.03 and 200 steps. ... we choose a relatively large q, namely q = 150. ... The Adam optimizer with 100 steps and a learning rate of 0.1 is used for the projection operator. (A sketch of the projection step follows the table.) |
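
The pseudocode row above reduces to a very short loop once a projection routine onto the range of G is available. Below is a minimal sketch in PyTorch; the authors used Python 3.6 with TensorFlow 1.5.0, so the framework, the function names, and the `project_G` callable are illustrative assumptions rather than the authors' implementation:

```python
import torch

def ppower(V: torch.Tensor, project_G, w0: torch.Tensor, T: int = 30):
    """Projected power method (Algorithm 1, PPower):
    w^(t+1) = P_G(V w^(t)), returning w^(T).

    `project_G` approximates the projection onto the range of the
    pre-trained generative model G (see the sketch below). T = 30
    matches the quoted experiment setup.
    """
    w = w0
    for _ in range(T):
        w = project_G(V @ w)  # power-iteration step followed by projection
    return w
```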
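
The projection P_G itself has no closed form; the quoted setup solves it with Adam (learning rate 0.03, 200 steps). Here is a hedged sketch, again in PyTorch, that minimizes ||G(z) − x||² over the latent code z. The generator G, its latent dimension, the zero initialization of z, and the final unit-norm rescaling are assumptions made for illustration:

```python
import torch

def make_project_G(G, latent_dim: int, steps: int = 200, lr: float = 0.03):
    """Builds an approximate projection onto the range of G via
    gradient-based latent optimization, following the quoted
    hyperparameters (Adam, lr = 0.03, 200 steps)."""
    def project_G(x: torch.Tensor) -> torch.Tensor:
        z = torch.zeros(latent_dim, requires_grad=True)  # assumed initialization
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = torch.sum((G(z) - x) ** 2)  # squared l2 distance to target
            loss.backward()
            opt.step()
        with torch.no_grad():
            w = G(z)
            return w / torch.norm(w)  # keep iterates unit-norm (assumption)
    return project_G
```

With these two pieces, `ppower(V, make_project_G(G, latent_dim), w0)` reproduces the iteration in the pseudocode row; for the experiments using the second quoted configuration, one would instead pass `steps=100` and `lr=0.1`.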