The Impact of Regularization on High-dimensional Logistic Regression

Authors: Fariborz Salehi, Ehsan Abbasi, Babak Hassibi

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The theory is validated by extensive numerical simulations across a range of parameter values and problem instances.
Researcher Affiliation | Academia | Fariborz Salehi, Ehsan Abbasi, and Babak Hassibi, Department of Electrical Engineering, California Institute of Technology, Pasadena, CA, USA.
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link regarding the availability of its source code.
Open Datasets | No | The paper relies on synthetic data generated under distributional assumptions rather than an open dataset: 'For our analysis we assume that the regularizer $f(\cdot)$ is separable, $f(\mathbf{w}) = \sum_i f(w_i)$, and the data points are drawn independently from the Gaussian distribution, $\{\mathbf{x}_i\}_{i=1}^{n} \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \frac{1}{p}\mathbf{I}_p)$. [...] We further assume that the entries of $\boldsymbol{\beta}$ are drawn from a distribution $\Pi$.' (See the first code sketch after the table.)
Dataset Splits | No | The paper describes synthetic data generation for its simulations, stating 'For the numerical simulations, the result is the average over 100 independent trials with p = 250 and κ = 1.' It does not specify train/validation/test splits, since it uses no fixed, pre-existing dataset that would be partitioned for training and evaluation. (See the second sketch after the table.)
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the numerical simulations.
Software Dependencies | No | The paper does not specify software dependencies or version numbers for its numerical simulations.
Experiment Setup | No | The paper specifies simulation parameters such as 'average over 100 independent trials with p = 250 and κ = 1' and 'ϵ = 0.001'. These describe the problem setup and simulation environment rather than the hyperparameters or training configuration typical of a machine-learning experiment (e.g., learning rate, batch size, optimizer settings); the third sketch after the table illustrates one possible reading.
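
A minimal sketch of the data-generating process quoted under Open Datasets, assuming a logistic label model and taking Π to be the standard normal (the paper leaves Π general); the function name generate_data is ours:

```python
import numpy as np

def generate_data(n, p, rng):
    """Synthetic data under the stated assumptions: features x_i drawn
    i.i.d. from N(0, I_p / p) and signal entries drawn i.i.d. from Pi.
    Pi is taken to be the standard normal here purely for illustration."""
    beta = rng.standard_normal(p)                 # beta_j ~ Pi (assumed N(0, 1))
    X = rng.standard_normal((n, p)) / np.sqrt(p)  # rows ~ N(0, I_p / p)
    probs = 1.0 / (1.0 + np.exp(-X @ beta))       # logistic link (assumed label model)
    y = rng.binomial(1, probs)                    # binary labels in {0, 1}
    return X, y, beta
```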
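Building on that, a sketch of the Monte Carlo protocol quoted under Dataset Splits, reusing generate_data from the block above. The ratio delta = n/p, the penalty strength lam, and the cosine-similarity recovery metric are illustrative assumptions, as is the use of scikit-learn's ℓ2-penalized solver:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def run_trials(n_trials=100, p=250, delta=2.0, lam=0.1, seed=0):
    """Average a recovery metric over independent trials, mirroring the
    reported protocol (100 trials, p = 250). delta, lam, and the metric
    are illustrative choices, not taken from the paper."""
    rng = np.random.default_rng(seed)
    n = int(delta * p)
    scores = []
    for _ in range(n_trials):
        X, y, beta = generate_data(n, p, rng)  # from the sketch above
        # sklearn minimizes ||w||^2 / 2 + C * (logistic loss), so setting
        # C = 1 / (n * lam) matches (1/n) * loss + (lam/2) * ||w||^2.
        clf = LogisticRegression(penalty="l2", C=1.0 / (n * lam),
                                 fit_intercept=False, max_iter=1000)
        clf.fit(X, y)
        w = clf.coef_.ravel()
        scores.append(w @ beta / (np.linalg.norm(w) * np.linalg.norm(beta)))
    return float(np.mean(scores))
```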
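Finally, one plausible reading of 'ϵ = 0.001' under Experiment Setup is as a convergence tolerance for the regularized logistic regression solver; the sketch below makes that assumption and, likewise as an assumption, instantiates the separable regularizer as the ℓ1 norm:

```python
import numpy as np

def prox_grad_logreg(X, y, lam=0.1, eps=1e-3, step=0.5, max_iter=5000):
    """Proximal gradient descent for logistic loss plus a separable
    regularizer, here the l1 norm. Treating eps = 0.001 as the stopping
    tolerance (and the l1 choice itself) are assumptions of this sketch."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(max_iter):
        grad = X.T @ (1.0 / (1.0 + np.exp(-X @ w)) - y) / n  # logistic-loss gradient
        v = w - step * grad                                   # gradient step
        w_new = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # prox step
        if np.linalg.norm(w_new - w) <= eps * max(1.0, np.linalg.norm(w)):
            return w_new
        w = w_new
    return w
```

Soft-thresholding is the proximal operator of $\lambda \|\cdot\|_1$, which is what makes the update above a proximal gradient step for this particular separable choice of $f$.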