Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Memory and Computation Efficient PCA via Very Sparse Random Projections

Authors: Farhad Pourkamali-Anaraki, Shannon M. Hughes

ICML 2014 | Venue PDF | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We present experimental results demonstrating that this approach allows for simultaneously achieving a substantial reduction of the computational complexity and memory/storage space, with little loss in accuracy, particularly for very high-dimensional data. |
| Researcher Affiliation | Academia | Farhad Pourkamali-Anaraki EMAIL Shannon M. Hughes EMAIL Department of Electrical, Computer, and Energy Engineering, University of Colorado at Boulder, CO, 80309, USA |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements or links indicating the release of its source code. |
| Open Datasets | Yes | Finally, we consider the MNIST dataset to see a real-world application outside the spiked covariance model. This dataset contains 70,000 samples of handwritten digits, which we have resized to 40×40 pixels. Hence, we have 70,000 samples in R^1600. |
| Dataset Splits | No | The paper does not specify exact training, validation, or test dataset splits. It mentions the total number of samples for MNIST but no partitioning. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware used to run its experiments. |
| Software Dependencies | No | The paper mentions MATLAB's svds but does not specify a version number for MATLAB or any other software dependencies with their versions. |
| Experiment Setup | No | The paper discusses parameters like SNR (signal-to-noise ratio), measurement ratio (m/p), and compression factor (γ) that are integral to its method, but it does not provide specific hyperparameter values or system-level training settings typically found in experimental setups (e.g., learning rates, batch sizes, optimizers). |
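For context on the method being assessed: the paper compresses high-dimensional samples with very sparse random projection matrices before performing PCA. The following is a minimal, hypothetical sketch (not the authors' code) of a Li/Achlioptas-style very sparse random projection applied to synthetic spiked-covariance-like data in R^1600, illustrating the norm-preservation property such methods rely on; the helper name `very_sparse_projection` and all parameter values are illustrative assumptions.

```python
import numpy as np

def very_sparse_projection(p, m, s, rng):
    # Hypothetical helper (not from the paper): a p x m matrix whose entries
    # are +sqrt(s) or -sqrt(s) with probability 1/(2s) each and 0 otherwise,
    # so each entry has unit variance and only ~1/s of entries are nonzero.
    return np.sqrt(s) * rng.choice(
        [-1.0, 0.0, 1.0], size=(p, m), p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)]
    )

rng = np.random.default_rng(0)
p, n, m = 1600, 500, 256   # ambient dim (as for 40x40 MNIST), samples, compressed dim
s = int(np.sqrt(p))        # density 1/sqrt(p): about 2.5% of entries are nonzero

# Synthetic data near a 5-dimensional subspace plus noise
# (a spiked-covariance-style model, as studied in the paper).
k = 5
U = np.linalg.qr(rng.standard_normal((p, k)))[0]
X = (5.0 * rng.standard_normal((n, k))) @ U.T + rng.standard_normal((n, p))

R = very_sparse_projection(p, m, s, rng)
Y = X @ R / np.sqrt(m)     # compressed samples, n x m; E[||y||^2] = ||x||^2

# Relative error in squared norms after compression (a JL-style guarantee).
err = np.abs((Y ** 2).sum(axis=1) - (X ** 2).sum(axis=1)) / (X ** 2).sum(axis=1)
print(f"mean relative norm error: {err.mean():.3f}")  # small for m = 256
```

Storing Y instead of X cuts memory by a factor of p/m (here 6.25x), and the sparsity makes each projection cheap to apply. Note that the paper's actual estimator recovers principal components in the ambient space; this sketch only demonstrates the compression primitive, not that recovery procedure.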