Precise expressions for random projections: Low-rank approximation and randomized Newton

Authors: Michał Dereziński, Feynman T. Liang, Zhenyu Liao, Michael W. Mahoney

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we numerically verify the accuracy of our theoretical predictions for the low-rank approximation error of sketching on benchmark datasets from the libsvm repository."
Researcher Affiliation | Academia | Michał Dereziński, Department of Statistics, University of California, Berkeley (mderezin@berkeley.edu); Feynman Liang, Department of Statistics, University of California, Berkeley (feynman@berkeley.edu); Zhenyu Liao, ICSI and Department of Statistics, University of California, Berkeley (zhenyu.liao@berkeley.edu); Michael W. Mahoney, ICSI and Department of Statistics, University of California, Berkeley (mmahoney@stat.berkeley.edu)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement about releasing source code for its methodology, nor links to code repositories.
Open Datasets | Yes | "We numerically verify the accuracy of our theoretical predictions for the low-rank approximation error of sketching on benchmark datasets from the libsvm repository [CL11]."
Dataset Splits | No | The paper mentions using benchmark datasets but does not specify how they were split into training, validation, and test sets (e.g., percentages, sample counts, or citations to standard splits).
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions the libsvm repository for datasets, which refers to a software library, but it gives no version numbers for LIBSVM or any other software components (e.g., programming languages, libraries, frameworks) used in the experiments.
Experiment Setup | No | The paper does not provide specifics of the experimental setup, such as hyperparameter values (e.g., learning rate, batch size) or training configurations for the algorithms used in the empirical evaluations.
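The quoted experiments concern the low-rank approximation error of sketching. As a hedged illustration of what measuring that error looks like (a minimal NumPy sketch using a plain Gaussian sketching matrix on synthetic data, not the paper's exact estimator or datasets), one might compute:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data matrix with a decaying spectrum (a stand-in for a libsvm dataset).
n, d, k = 200, 50, 5
U, _ = np.linalg.qr(rng.standard_normal((n, d)))  # n x d, orthonormal columns
s = 2.0 ** -np.arange(d)                          # singular values of A
A = U * s                                         # A = U @ diag(s)

# Gaussian sketch: project A onto the row space of the k x d sketch S @ A.
S = rng.standard_normal((k, n)) / np.sqrt(k)
SA = S @ A
P = np.linalg.pinv(SA) @ SA   # d x d orthogonal projection onto rowspace(S @ A)
A_approx = A @ P              # rank-k approximation of A

# Squared Frobenius approximation error vs. the best possible rank-k error.
err = np.linalg.norm(A - A_approx, "fro") ** 2
best_err = np.sum(s[k:] ** 2)
```

The error `err` is always at least `best_err` (the optimal rank-k error, attained by the truncated SVD); the paper's theory characterizes precisely how much larger the sketched error is in expectation.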