Online Adaptive Principal Component Analysis and Its Extensions
Authors: Jianjun Yuan, Andrew Lamperski
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 'We demonstrate both theoretically and experimentally that the proposed algorithms can adapt to the changing environments.' and 'In Section 6 on experiments, we also test our algorithm's effectiveness. In particular, we show that our proposed algorithm can adapt to the changing environment faster than the previous online PCA algorithm.' |
| Researcher Affiliation | Academia | University of Minnesota, Minneapolis, USA. |
| Pseudocode | Yes | Algorithm 1 Adaptive Best Subset of Experts, Algorithm 2 Mixture Decomposition (Warmuth & Kuzmin, 2008), Algorithm 3 Capping Algorithm (Warmuth & Kuzmin, 2008), Algorithm 4 Uncentered online adaptive PCA, Algorithm 5 Online adaptive variance minimization over unit sphere, Algorithm 6 Online adaptive variance minimization over simplex |
| Open Source Code | Yes | code available at https://github.com/yuanx270/onlineadaptive-PCA |
| Open Datasets | Yes | 'The second example uses the practical dataset Yale-B which is a collection of face images.' and 'In this toy example, we create the synthetic data samples coming from changing subspace/environment, which is a similar setup as in (Warmuth & Kuzmin, 2008).' |
| Dataset Splits | No | The paper mentions a temporal splitting of the Yale-B dataset ('The data is split into 20 time intervals corresponding to 20 different people.') but does not specify traditional training, validation, or test dataset splits with percentages, sample counts, or explicit cross-validation methodology. |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library or solver names with version numbers) needed to replicate the experiments. |
| Experiment Setup | Yes | 'We use k = 2, which is the same as the previous example. The stepsize η is also tuned heuristically like the previous example, which is equal to 5 and α = 1e-4.' and 'We can tune the stepsize heuristically in practice and in this example we just use η = 1 and α = 1e-5.' |
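The experiment-setup row quotes the stepsize η used in the matrix exponentiated-gradient updates that algorithms like the paper's uncentered online adaptive PCA (Algorithm 4) build on. As a rough illustration only, the sketch below shows one such update for a trace-one density matrix; it omits the capping step (Algorithm 3), the mixture decomposition (Algorithm 2), and the adaptive fixed-share mixing governed by α, so it is not the authors' algorithm, just the underlying Warmuth & Kuzmin-style step it extends.

```python
import numpy as np

def meg_update(W, x, eta=5.0):
    """One matrix exponentiated-gradient step for online (uncentered) PCA.

    W   : current density matrix (symmetric PSD, trace 1).
    x   : one data sample of shape (d,).
    eta : stepsize; the report quotes eta = 5, tuned heuristically.

    Sketch only: the capping/mixing steps of the paper's adaptive
    algorithm (controlled by alpha) are deliberately omitted.
    """
    # Matrix log via eigendecomposition (W is symmetric PSD).
    evals, evecs = np.linalg.eigh(W)
    log_W = evecs @ np.diag(np.log(np.clip(evals, 1e-12, None))) @ evecs.T

    # Gradient of the instantaneous loss tr(W x x^T) is x x^T.
    M = log_W - eta * np.outer(x, x)

    # Matrix exp, shifted for numerical stability, then renormalize.
    e2, v2 = np.linalg.eigh(M)
    e2 -= e2.max()
    W_new = v2 @ np.diag(np.exp(e2)) @ v2.T
    return W_new / np.trace(W_new)
```

The update stays on the set of trace-one PSD matrices by construction: the exponential keeps eigenvalues positive and the final division restores unit trace.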