On Optimal Generalizability in Parametric Learning
Authors: Ahmad Beirami, Meisam Razaviyayn, Shahin Shahrampour, Vahid Tarokh
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In our numerical experiments, we illustrate the accuracy and efficiency of ALOOCV as well as our proposed framework for the optimization of the regularizer. |
| Researcher Affiliation | Academia | School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA. Department of Industrial and Systems Engineering, University of Southern California, Los Angeles, CA 90089, USA. |
| Pseudocode | Yes | Algorithm 1 Approximate gradient descent algorithm for tuning λ; Algorithm 2 Stochastic (online) approximate gradient descent algorithm for tuning λ (a structural sketch of such a tuning loop appears after the table) |
| Open Source Code | No | The paper does not provide any specific links to source code repositories or explicitly state that the code is publicly available. |
| Open Datasets | Yes | We applied logistic regression on MNIST and CIFAR-10 image datasets |
| Dataset Splits | Yes | A classical cross validation strategy is the leave-one-out cross validation (LOOCV) where one sample is left out for validation and training is done on the rest of the samples that are presented to the learner, and this process is repeated on all of the samples. (A minimal LOOCV sketch follows the table.) |
| Hardware Specification | No | The paper only mentions 'on a PC' without specifying any details about the CPU, GPU, or memory. |
| Software Dependencies | No | The paper does not list any specific software dependencies with version numbers. |
| Experiment Setup | Yes | We initialize the algorithm with λ^1_1 = … = λ^1_50 = 1/3 and compute ACV using Theorem 1. ... Initialize the tuning parameter λ0, choose a step-size selection rule, and set t = 0. |
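
The Dataset Splits row quotes the paper's description of exact leave-one-out cross validation, which is the quantity that ALOOCV approximates. As a point of reference only, a minimal sketch of exact LOOCV for ℓ2-regularized logistic regression is given below; scikit-learn and the function name `loocv_error` are illustrative choices, not tooling reported in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def loocv_error(X, y, C=1.0):
    """Exact leave-one-out CV: refit the model n times, each time
    holding out one sample and scoring the fit on that sample."""
    n = len(y)
    mistakes = 0
    for i in range(n):
        mask = np.arange(n) != i                       # train on every sample except i
        model = LogisticRegression(C=C, max_iter=1000).fit(X[mask], y[mask])
        mistakes += int(model.predict(X[i:i+1])[0] != y[i])
    return mistakes / n                                # LOOCV misclassification rate
```

The n refits above are exactly the cost that ALOOCV is designed to avoid by approximating each held-out loss from a single fit on the full sample.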
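The Pseudocode and Experiment Setup rows both refer to Algorithm 1, gradient descent on a cross-validation surrogate with respect to the regularization vector λ. The sketch below shows only the outer-loop structure under stated assumptions: `cv_loss`, `tune_lambda`, the finite-difference gradient, and the non-negativity projection are stand-ins of my own, not the paper's analytic ALOOCV gradient from Theorem 1.

```python
import numpy as np

def tune_lambda(cv_loss, lam0, step=0.1, iters=50, eps=1e-4):
    """Skeleton of an Algorithm-1-style outer loop: (approximate) gradient
    descent on a cross-validation surrogate with respect to the
    regularization vector lambda. `cv_loss(lam)` is a placeholder for the
    paper's ALOOCV estimate (Theorem 1); the gradient here is a crude
    finite-difference surrogate, not the paper's analytic expression."""
    lam = np.asarray(lam0, dtype=float)
    for _ in range(iters):
        base = cv_loss(lam)
        grad = np.zeros_like(lam)
        for j in range(lam.size):                      # finite-difference gradient estimate
            bumped = lam.copy()
            bumped[j] += eps
            grad[j] = (cv_loss(bumped) - base) / eps
        lam = np.maximum(lam - step * grad, 0.0)       # gradient step; keep lambda nonnegative
    return lam

# Matching the initialization quoted in the Experiment Setup row
# (my_aloocv_estimate is a hypothetical callable):
# lam_star = tune_lambda(my_aloocv_estimate, lam0=np.full(50, 1/3))
```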