Robust Subspace Approximation in a Stream
Authors: Roie Levin, Anish Prasad Sevekari, David Woodruff
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section we empirically demonstrate the effectiveness of COARSEAPPROX compared to the truncated SVD. We experiment on synthetic and real world data sets. |
| Researcher Affiliation | Academia | (1) Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213; (2) Department of Mathematical Sciences, Carnegie Mellon University, Pittsburgh, PA 15213 |
| Pseudocode | Yes | Algorithm 1 COARSEAPPROX, Algorithm 2 (1 + ϵ)-APPROX, Algorithm 3 BOOTSTRAPCORESET |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We experiment on synthetic and real world data sets. ... two real world datasets from the UCI Machine Learning Repository: Glass is a 214 × 9 matrix representing attributes of glass samples, and E.Coli is a 336 × 7 matrix representing attributes of various proteins. |
| Dataset Splits | No | The paper describes the datasets used for experiments but does not provide specific details on training, validation, and test splits (e.g., percentages, sample counts, or explicit standard splits). |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to conduct the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers used for implementation (e.g., programming languages, libraries, or solvers with version numbers). |
| Experiment Setup | No | The paper mentions running the randomized algorithm 20 times and using a heuristic extension, but it does not provide specific experimental setup details such as hyperparameters (learning rate, batch size, epochs, optimizer settings) or other system-level training configurations. |
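
The rows above describe an evaluation that compares COARSEAPPROX against the truncated SVD under the robust loss (the sum of the ℓ2 distances of the data points to a rank-k subspace), with the randomized algorithm run 20 times. Since no source code is released, the following is a minimal re-implementation sketch of that evaluation loop, not the authors' code: the `sampled_subspace` baseline is a hypothetical stand-in for the streaming algorithm, the random matrix stands in for the 214 × 9 Glass data, and taking the best of the 20 runs is an assumption.

```python
import numpy as np

def robust_loss(A: np.ndarray, V: np.ndarray) -> float:
    """Sum of l2 distances of the rows of A to the subspace spanned by
    the columns of the orthonormal matrix V (the paper's robust objective)."""
    residual = A - (A @ V) @ V.T
    return float(np.linalg.norm(residual, axis=1).sum())

def truncated_svd_subspace(A: np.ndarray, k: int) -> np.ndarray:
    """Rank-k right singular subspace: the Frobenius-optimal baseline."""
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T  # d x k orthonormal basis

def sampled_subspace(A: np.ndarray, k: int, rng: np.random.Generator,
                     oversample: int = 4) -> np.ndarray:
    """Hypothetical randomized baseline built from a few sampled rows;
    a placeholder for COARSEAPPROX, whose implementation is not public."""
    idx = rng.choice(A.shape[0], size=oversample * k, replace=False)
    Q, _ = np.linalg.qr(A[idx].T)  # orthonormal basis for the sampled rows
    return Q[:, :k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((214, 9))  # stand-in for the Glass matrix
    k = 3

    svd_loss = robust_loss(A, truncated_svd_subspace(A, k))
    # The paper runs its randomized algorithm 20 times; reporting the
    # best of those runs is an assumption made for this sketch.
    rand_loss = min(robust_loss(A, sampled_subspace(A, k, rng))
                    for _ in range(20))
    print(f"truncated SVD loss: {svd_loss:.3f}, "
          f"best randomized loss over 20 runs: {rand_loss:.3f}")
```

On real data one would replace the synthetic matrix with the Glass or E.Coli tables from the UCI Machine Learning Repository; the rest of the loop is unchanged.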