Corruption-Tolerant Algorithms for Generalized Linear Models
Authors: Bhaskar Mukhoty, Debojyoti Dey, Purushottam Kar
AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "SVAM also empirically outperforms several existing problem-specific techniques for robust regression and classification." ... "extensive empirical evaluation demonstrating that despite being a generic framework, SVAM is competitive to or outperforms algorithms specifically designed to solve problems such as least-squares and logistic regression." The paper also reports the hardware used and notes that "SVAM was benchmarked against several baselines", confirming experimental work. |
| Researcher Affiliation | Academia | Mohamed Bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE; Indian Institute of Technology Kanpur, Uttar Pradesh, India |
| Pseudocode | Yes | Algorithm 1: SVAM: Sequential Variance-Altered MLE, Algorithm 2: SVAM-RR: Robust Least Squares Regression, Algorithm 3: SVAM-ME: Robust Mean Estimation, Algorithm 4: SVAM-GAMMA: Robust Gamma Regression, Algorithm 5: SVAM-LR: Robust Classification (a hedged sketch of the SVAM-RR iteration appears after the table) |
| Open Source Code | Yes | Code for SVAM is available at https://github.com/purushottamkar/svam/ |
| Open Datasets | No | Due to lack of space, details of experimental setup, data generation, how adversaries were simulated etc are presented in Appendix C. The paper describes generating data but does not explicitly state that the generated datasets are publicly available or provide a link for them. |
| Dataset Splits | No | Hyperparameter Tuning: SVAM's two hyperparameters β1, ξ were tuned using a held-out validation set. While a validation set is mentioned, the paper does not specify the exact percentages or sample counts for the train/validation/test splits, nor does it refer to standard predefined splits with citations. |
| Hardware Specification | Yes | We used a 64-bit machine with Intel Core i7-6500U CPU @ 2.50GHz, 4 cores, 16 GB RAM, Ubuntu 16.04 OS. |
| Software Dependencies | No | The paper mentions 'Ubuntu 16.04 OS' but does not specify any other software dependencies, libraries, or solvers with their respective version numbers. |
| Experiment Setup | Yes | Hyperparameter Tuning: SVAM's two hyperparameters β1, ξ were tuned using a held-out validation set. As the validation data could also contain corruptions, validation error was calculated by rejecting the top α fraction of validation points with the highest prediction error. The true value of α was provided to competitor algorithms as a handicap but not to SVAM. Thus, α itself was treated as a (third) hyperparameter for SVAM. In Figs 1(a,b), SVAM-RR was offered hyperparameters in a wide range of values to study how it responded when provided misspecified hyperparameters. (A minimal sketch of this robust validation criterion appears after the table.) |
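
The pseudocode listed in the table is not reproduced on this page, so the following is a minimal Python sketch of one plausible reading of the SVAM-RR loop: alternate a weighted least-squares update with a geometric schedule on the altered inverse variance, starting from β1 and scaling by ξ each step. The Gaussian weight form, the ridge term, and the iteration count are assumptions for illustration, not the paper's Algorithm 2.

```python
import numpy as np

def svam_rr(X, y, beta1, xi, n_iters=30):
    """Hedged sketch of a SVAM-RR-style loop (robust least-squares regression).

    Assumptions (not taken verbatim from the paper's Algorithm 2):
    - per-point weights are Gaussian likelihood terms at the current altered
      inverse variance beta_t, i.e. s_i = exp(-beta_t * r_i^2 / 2)
    - each step solves a weighted least-squares problem
    - beta is increased geometrically: beta_{t+1} = xi * beta_t
    """
    n, d = X.shape
    w = np.zeros(d)
    beta = beta1
    for _ in range(n_iters):
        r = y - X @ w                          # residuals under the current model
        s = np.exp(-0.5 * beta * r ** 2)       # down-weight points with large residuals
        XtS = X.T * s                          # X^T @ diag(s)
        # weighted least-squares update: w = (X^T S X)^{-1} X^T S y (small ridge for stability)
        w = np.linalg.solve(XtS @ X + 1e-8 * np.eye(d), XtS @ y)
        beta *= xi                             # geometric variance-altering schedule
    return w
```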
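
The Experiment Setup row describes computing validation error by rejecting the top α fraction of validation points with the highest prediction error. Below is a minimal sketch of that criterion, assuming squared prediction error as the per-point metric (the actual metric in the paper may differ by task).

```python
import numpy as np

def robust_validation_error(y_true, y_pred, alpha):
    """Validation error that discards the top-alpha fraction of points with
    the highest prediction error, as described for hyperparameter tuning."""
    err = (np.asarray(y_true) - np.asarray(y_pred)) ** 2   # per-point squared error (assumed metric)
    n_reject = int(np.floor(alpha * err.size))              # top-alpha fraction to discard
    n_reject = min(n_reject, err.size - 1)                  # always keep at least one point
    kept = np.sort(err)[: err.size - n_reject]              # keep the smallest errors
    return kept.mean()
```

In a tuning loop, (β1, ξ, α) would then be selected, for example by grid search, to minimize this robust validation error on the held-out set.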