Exponentially convergent stochastic k-PCA without variance reduction

Authors: Cheng Tang

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we present our empirical evaluation of Algorithm 1 to understand its convergence property on low-rank or effectively low-rank datasets. We first verified its performance on simulated low-rank data and effectively low-rank data, and then we evaluated its performance on two real-world effectively low-rank datasets.
Researcher Affiliation | Industry | Amazon AI, New York, NY 10001; tcheng@amazon.com
Pseudocode | Yes | Algorithm 1: Matrix Krasulina (see the illustrative sketch after this table).
Open Source Code | Yes | Code will be available at https://github.com/chengtang48/neurips19.
Open Datasets | Yes | For MNIST [29], we use the 60000 training examples of digit pixel images, with d = 784.
Dataset Splits | No | The paper mentions using 60000 training examples for MNIST but does not explicitly detail any train/validation/test splits, specific percentages, or how validation was performed.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments (e.g., GPU/CPU models, memory specifications).
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments.
Experiment Setup | Yes | We initialized Algorithm 1 with a random matrix W_o and ran it for one or a few epochs, each consisting of 5000 iterations. We compare Algorithm 1 against the exponentially convergent VR-PCA: we initialize both algorithms with the same random matrix and train (repeated 5 times) using the best constant learning rate we found empirically for each algorithm.
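For reference, below is a minimal sketch of a Krasulina-type k-PCA update and of the training loop described in the Experiment Setup row. The update shown is a plausible matrix generalization of Krasulina's rule (move the iterate along the component of each sample orthogonal to its current row span); the exact form of the paper's Algorithm 1, as well as the function names, learning rate, and defaults here, are assumptions rather than material taken from the paper or its repository.

```python
import numpy as np


def matrix_krasulina_step(W, x, lr):
    """One Krasulina-type update for a k x d iterate W and a sample x of shape (d,).

    Sketch only: moves W along the component of x orthogonal to the current
    row span of W. The exact update in the paper's Algorithm 1 may differ.
    """
    Wx = W @ x                                  # shape (k,)
    # Projection of x onto the row span of W: W^T (W W^T)^{-1} W x
    proj = W.T @ np.linalg.solve(W @ W.T, Wx)   # shape (d,)
    residual = x - proj                         # part of x orthogonal to span(W)
    return W + lr * np.outer(Wx, residual)      # rank-one correction, shape (k, d)


def run_krasulina(X, k, lr, n_epochs=1, seed=0):
    """Toy loop mirroring the quoted setup: random init, one or a few passes
    over the data with a constant learning rate. Illustrative defaults only.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.standard_normal((k, d))             # random initial matrix
    for _ in range(n_epochs):
        for i in rng.permutation(n):            # one epoch = one pass over the samples
            W = matrix_krasulina_step(W, X[i], lr)
    return W
```

As a usage example under the quoted setup, an epoch of 5000 iterations would correspond to calling run_krasulina on 5000 samples per pass, with the constant learning rate lr tuned empirically for each algorithm being compared.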