From Predictions to Decisions: Using Lookahead Regularization

Authors: Nir Rosenfeld, Anna Hilgard, Sai Srivatsa Ravindranath, David C. Parkes

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 'We report the results of experiments on real and synthetic data that show the effectiveness of this approach.' Also, Section 3 (Experiments): 'In this section, we evaluate our approach in three experiments of increasing complexity and scale, where the first is synthetic and the latter two use real data.'
Researcher Affiliation | Academia | Nir Rosenfeld, Faculty of Computer Science, Technion Israel Institute of Technology (nirr@cs.technion.ac.il); Sophie Hilgard, School of Engineering and Applied Science, Harvard University (ash798@g.harvard.edu); Sai S. Ravindranath, School of Engineering and Applied Science, Harvard University (saisr@g.harvard.edu); David C. Parkes, School of Engineering and Applied Science, Harvard University (parkes@eecs.harvard.edu)
Pseudocode | No | The paper has a section titled '2.3 Algorithm' that describes the alternating optimization steps, but it does so in paragraph form without a structured pseudocode or algorithm block.
Open Source Code | Yes | 'Our code can be found at https://github.com/papushado/lookahead.'
Open Datasets | Yes | 'The second experiment focuses on wine quality using the wine dataset from the UCI data repository [10].' and 'The final experiment focuses on the prediction of diabetes progression using the diabetes dataset [12].'
Dataset Splits | No | For the wine experiment: 'The active set includes 30% of the data, and is further split 75-25 into a train set used for learning and tuning and a held-out test set used for final evaluation.' For the diabetes experiment: 'train and test sets are sampled uniformly from the data.' The paper specifies train and test sets but does not explicitly define a separate validation split or its size. (A hypothetical sketch of this split protocol appears after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor speeds, memory amounts, or other machine specifications) used for running its experiments.
Software Dependencies | No | The paper discusses the types of models and methods used (e.g., 'linear models', 'Ridge Regression', 'generalized additive model (GAM) with splines', 'Bootstrapping', 'Quantile regression'), but it does not provide specific ancillary software details, such as library names with version numbers, needed to replicate the experiments.
Experiment Setup | Yes | For the wine experiment, the paper states: 'The baseline includes a linear f_base trained with ℓ2 regularization (i.e., Ridge Regression) with regularization coefficient α ≥ 0. Our lookahead model includes a linear f_look trained with lookahead regularization (Eq. (4)) with regularization coefficient λ ≥ 0.' It also specifies 'decision step-size η = 0.5' and 'η = 2'. For the diabetes experiment, it states 'set η = 5'. (A hypothetical sketch of the baseline and the decision step appears after the table.)
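
To make the data protocol above concrete, here is a minimal sketch, not the authors' code, of the wine-quality setup described in the 'Open Datasets' and 'Dataset Splits' rows: the UCI wine dataset, with a 30% active set that is further split 75-25 into a train set and a held-out test set. The choice of the red-wine file, its download URL, and the random sampling of the active set are assumptions made here for illustration; the released repository (https://github.com/papushado/lookahead) is the authoritative reference.

```python
# Hypothetical sketch of the quoted split protocol; not the paper's implementation.
import pandas as pd
from sklearn.model_selection import train_test_split

# UCI wine-quality data (reference [10] in the paper); the red-wine file is an assumption.
URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv"
data = pd.read_csv(URL, sep=";")
X = data.drop(columns="quality").values
y = data["quality"].values

# 30% of the data forms the active set (how it is sampled is an assumption here).
X_rest, X_active, y_rest, y_active = train_test_split(X, y, test_size=0.3, random_state=0)

# The active set is split 75-25 into a train/tuning portion and a held-out test portion.
X_train, X_test, y_train, y_test = train_test_split(
    X_active, y_active, test_size=0.25, random_state=0
)
print(X_train.shape, X_test.shape)
```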
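The 'Experiment Setup' row quotes only the Ridge baseline f_base (regularization coefficient α ≥ 0) and the decision step-size η; the lookahead regularizer itself (Eq. (4)) is not reproduced in this report and is not implemented below. The sketch assumes that a decision step moves features in the direction that increases the predicted outcome, x' = x + η·∇_x f(x), which for a linear model is a step along the coefficient vector; the synthetic data and α = 1.0 are placeholders.

```python
# Hypothetical sketch of the Ridge baseline and a gradient decision step of size eta.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))                      # placeholder features
y_train = X_train @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

# Baseline: linear f_base trained with l2 regularization (Ridge Regression), alpha >= 0.
alpha = 1.0                                              # placeholder regularization coefficient
f_base = Ridge(alpha=alpha).fit(X_train, y_train)

# Assumed decision step: x' = x + eta * grad_x f(x); for a linear model the
# gradient is the coefficient vector.
eta = 0.5                                                # decision step-size from the wine experiment
x = X_train[0]
x_new = x + eta * f_base.coef_
print("prediction before:", f_base.predict(x.reshape(1, -1))[0])
print("prediction after :", f_base.predict(x_new.reshape(1, -1))[0])
```

With η = 0.5 as in the wine experiment, the step above illustrates how a fitted model induces new feature values; under the paper's framing, the lookahead regularizer is meant to keep the true outcomes at such induced points from degrading.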