Approximate Leave-One-Out for Fast Parameter Tuning in High Dimensions

Authors: Shuaiwen Wang, Wenda Zhou, Haihao Lu, Arian Maleki, Vahab Mirrokni

ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically demonstrate the effectiveness of our results for non-differentiable cases. We present extensive simulations to confirm the accuracy of our formulas on various important machine learning models.
Researcher Affiliation | Collaboration | (1) Department of Statistics, Columbia University, New York, USA; (2) Mathematics Department and Operations Research Center, Massachusetts Institute of Technology, Massachusetts, USA; (3) Google Research, New York, USA.
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at github.com/wendazhou/alocv-package.
Open Datasets | No | The paper mentions evaluating performance on 'real-world datasets' and on models such as the LASSO, SVM, fused LASSO, and nuclear-norm minimization. However, it defers detailed information about these datasets to Appendix F, which is not provided, so concrete access information for specific datasets is not available in the main text.
Dataset Splits | Yes | One common choice is k-fold cross validation... An alternative choice of cross validation is LOOCV... In this example, 5-fold CV exhibits significant bias, whereas ALO is unbiased. (See the sketch after this table.)
Hardware Specification | No | The paper acknowledges 'computing resources from Columbia University’s Shared Research Computing Facility project,' but does not specify exact hardware details such as GPU/CPU models, memory, or specific cloud instances used for the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., 'Python 3.8, PyTorch 1.9'). It refers to general tools such as scikit-learn and glmnet for MATLAB in its references, but not as versioned software used for its own experiments.
Experiment Setup | No | The paper states: 'The full details of the experiments are provided in Appendix F.' Specific experimental setup details such as hyperparameters or training configurations are therefore not included in the main text.
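
The Dataset Splits row above contrasts k-fold CV with LOOCV. As a concrete illustration of why closed-form leave-one-out formulas matter, the minimal sketch below shows the classical exact LOO identity for ridge regression, the simplest instance of the fast LOO shortcuts that the paper's ALO framework generalizes to non-differentiable estimators such as the LASSO and SVM. This sketch is not code from the paper or the alocv-package; the data and regularization value are made up for illustration.

```python
import numpy as np

# Minimal sketch (not the paper's general ALO formula): for ridge
# regression, leave-one-out residuals have an exact closed form (the
# classical PRESS identity), so LOOCV costs one fit instead of n fits.
# All data and the regularization value below are hypothetical.

rng = np.random.default_rng(0)
n, p, lam = 50, 10, 1.0
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n)

# One ridge fit on the full data: beta = (X'X + lam*I)^{-1} X'y.
G = X.T @ X + lam * np.eye(p)
beta = np.linalg.solve(G, X.T @ y)

# Leverage values h_i = x_i' (X'X + lam*I)^{-1} x_i.
h = np.einsum("ij,ij->i", X @ np.linalg.inv(G), X)

# Exact LOO residuals without refitting: (y_i - x_i'beta) / (1 - h_i).
loo_fast = (y - X @ beta) / (1.0 - h)

# Brute-force LOOCV for comparison: n separate refits.
loo_slow = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    Gi = X[keep].T @ X[keep] + lam * np.eye(p)
    bi = np.linalg.solve(Gi, X[keep].T @ y[keep])
    loo_slow[i] = y[i] - X[i] @ bi

print(np.allclose(loo_fast, loo_slow))  # True: the shortcut is exact for ridge
```

For ridge the identity is exact; for the non-smooth problems treated in the paper, ALO produces an approximation whose accuracy the authors verify through simulations.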