Online and Differentially-Private Tensor Decomposition
Authors: Yining Wang, Anima Anandkumar
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Numerical verification of noise conditions and comparison with whitening techniques. We verify our improved noise conditions for robust tensor power method on simulation tensor data. In particular, we consider three noise models and demonstrate varied asymptotic noise magnitudes at which tensor power method succeeds. The simulation results nicely match our theoretical findings and also suggest, in an empirical way, tightness of noise bounds in Theorem 2.2. Due to space constraints, simulation results are placed in Appendix A." |
| Researcher Affiliation | Academia | Yining Wang, Machine Learning Department, Carnegie Mellon University (yiningwa@cs.cmu.edu); Animashree Anandkumar, Department of EECS, University of California, Irvine (a.anandkumar@uci.edu) |
| Pseudocode | Yes | Algorithm 1: Robust tensor power method [1]; Algorithm 2: Online robust tensor power method; Algorithm 3: Differentially private robust tensor power method |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the methodology is openly available. |
| Open Datasets | No | Numerical verification of noise conditions... on simulation tensor data. The paper focuses on theoretical analysis and simulations, but does not provide access information for a specific public dataset or the simulated data used. |
| Dataset Splits | No | The paper does not provide specific details on training, validation, or test dataset splits. |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU/CPU models, memory) used for running experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers needed to replicate the experiment. |
| Experiment Setup | No | The paper does not provide specific experimental setup details such as hyperparameter values or training configurations. |
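For context on the pseudocode listed above, the core subroutine shared by Algorithms 1-3 is the tensor power iteration: repeatedly apply a symmetric third-order tensor to a unit vector and renormalize. The sketch below is a minimal, noiseless NumPy illustration of that iteration (not the authors' code, which is not released); the function name, the rank-2 test tensor, and the choice of 10 random restarts are illustrative assumptions.

```python
import numpy as np

def tensor_power_iteration(T, n_init=10, n_iter=30, rng=None):
    """Approximate the top eigenpair (lam, v) of a symmetric 3rd-order
    tensor T via power iteration with random restarts (noiseless sketch)."""
    rng = np.random.default_rng(rng)
    d = T.shape[0]
    best_val, best_vec = -np.inf, None
    for _ in range(n_init):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        for _ in range(n_iter):
            u = np.einsum('ijk,j,k->i', T, u, u)  # multilinear map T(I, u, u)
            u /= np.linalg.norm(u)
        val = np.einsum('ijk,i,j,k->', T, u, u, u)  # eigenvalue estimate T(u, u, u)
        if val > best_val:
            best_val, best_vec = val, u
    return best_val, best_vec

# Illustrative rank-2 orthogonal tensor T = 5*e1^{x3} + 2*e2^{x3};
# the iteration should recover the top component (lambda=5, v=e1).
d = 8
v1, v2 = np.eye(d)[0], np.eye(d)[1]
T = 5.0 * np.einsum('i,j,k->ijk', v1, v1, v1) \
    + 2.0 * np.einsum('i,j,k->ijk', v2, v2, v2)
lam, v = tensor_power_iteration(T, rng=0)
```

The online and differentially private variants in the paper build on this loop by, respectively, streaming stochastic updates and injecting calibrated noise into the iterates; neither modification is shown here.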