Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Online and Differentially-Private Tensor Decomposition
Authors: Yining Wang, Anima Anandkumar
NeurIPS 2016 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Numerical verification of noise conditions and comparison with whitening techniques: We verify our improved noise conditions for robust tensor power method on simulation tensor data. In particular, we consider three noise models and demonstrate varied asymptotic noise magnitudes at which tensor power method succeeds. The simulation results nicely match our theoretical findings and also suggest, in an empirical way, tightness of noise bounds in Theorem 2.2. Due to space constraints, simulation results are placed in Appendix A." |
| Researcher Affiliation | Academia | Yining Wang Machine Learning Department Carnegie Mellon University EMAIL Animashree Anandkumar Department of EECS University of California, Irvine EMAIL |
| Pseudocode | Yes | Algorithm 1 Robust tensor power method [1], Algorithm 2 Online robust tensor power method, Algorithm 3 Differentially private robust tensor power method |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the methodology is openly available. |
| Open Datasets | No | Numerical verification of noise conditions... on simulation tensor data. The paper focuses on theoretical analysis and simulations, but does not provide access information for a specific public dataset or the simulated data used. |
| Dataset Splits | No | The paper does not provide specific details on training, validation, or test dataset splits. |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU/CPU models, memory) used for running experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers needed to replicate the experiment. |
| Experiment Setup | No | The paper does not provide specific experimental setup details such as hyperparameter values or training configurations. |
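For context on the pseudocode flagged above, a minimal sketch of the core update in the (noiseless) tensor power method is given below. This is an illustrative NumPy implementation of the classic update $u \leftarrow T(I, u, u) / \|T(I, u, u)\|$, not the paper's Algorithms 1-3, which additionally handle multiple random restarts, deflation, and (for Algorithm 3) calibrated noise for differential privacy.

```python
import numpy as np

def tensor_power_iteration(T, n_iters=100, seed=0):
    """One run of the basic tensor power iteration for a symmetric
    3rd-order tensor T in R^{d x d x d}.  Illustrative sketch only:
    restarts, deflation, and privacy noise from the paper's
    Algorithms 1-3 are omitted."""
    rng = np.random.default_rng(seed)
    d = T.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    for _ in range(n_iters):
        # Multilinear map T(I, u, u): v_i = sum_{j,k} T[i,j,k] u_j u_k
        v = np.einsum('ijk,j,k->i', T, u, u)
        u = v / np.linalg.norm(v)
    # Eigenvalue estimate lambda = T(u, u, u)
    lam = np.einsum('ijk,i,j,k->', T, u, u, u)
    return lam, u

# On a rank-1 symmetric tensor T = 2 * a (x) a (x) a, the iteration
# recovers the eigenpair (2, +/- a).
a = np.array([1.0, 0.0, 0.0])
T = 2.0 * np.einsum('i,j,k->ijk', a, a, a)
lam, u = tensor_power_iteration(T)
```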