Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
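The validation step described above can be sketched as a per-variable agreement check between automated and manual labels. The function and toy labels below are illustrative assumptions, not the actual pipeline or data from [1]:

```python
from typing import Dict

def validation_accuracy(llm_labels: Dict[str, str], manual_labels: Dict[str, str]) -> float:
    """Fraction of manually labeled items where the LLM's label agrees.

    Items missing from llm_labels count as disagreements.
    """
    matches = sum(llm_labels.get(k) == v for k, v in manual_labels.items())
    return matches / len(manual_labels)

# Hypothetical labels for a single variable (e.g. "Open Source Code")
# across four papers; the paper IDs and values are made up.
manual = {"paper_a": "Yes", "paper_b": "No", "paper_c": "No", "paper_d": "Yes"}
llm    = {"paper_a": "Yes", "paper_b": "No", "paper_c": "Yes", "paper_d": "Yes"}
print(validation_accuracy(llm, manual))  # -> 0.75
```

In practice such a validation would report per-variable metrics (not just overall accuracy), since LLM error rates can differ sharply between, say, detecting pseudocode and detecting dataset splits.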
Sample and Computationally Efficient Robust Learning of Gaussian Single-Index Models
Authors: Puqian Wang, Nikos Zarifis, Ilias Diakonikolas, Jelena Diakonikolas
NeurIPS 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | The paper is theoretical in nature and does not include experiments. |
| Researcher Affiliation | Academia | All four authors (Puqian Wang, Nikos Zarifis, Ilias Diakonikolas, Jelena Diakonikolas) are affiliated with the Department of Computer Science, University of Wisconsin–Madison. |
| Pseudocode | Yes | The paper presents Algorithm 1 (k-Chow Tensor PCA, page 4) and Algorithm 2 (Riemannian GD with Warm-start, page 6). |
| Open Source Code | No | The paper is theoretical in nature and does not conduct experiments, nor does it provide any statement or link for open-source code release. |
| Open Datasets | No | The paper is theoretical and does not mention the use of any specific publicly available datasets for empirical training or evaluation. |
| Dataset Splits | No | The paper is theoretical and does not describe any training, validation, or test dataset splits, as it does not conduct experiments. |
| Hardware Specification | No | The paper is theoretical and does not describe any hardware used for running experiments. |
| Software Dependencies | No | The paper is theoretical and does not list any software dependencies or version numbers needed for experimental reproducibility. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup, hyperparameters, or system-level training settings for empirical runs. |