Variable Selection via Penalized Neural Network: a Drop-Out-One Loss Approach

Authors: Mao Ye, Yan Sun

ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experimental results on simulated and real world datasets show the efficiency of our method in terms of variable selection and prediction accuracy." |
| Researcher Affiliation | Academia | "Department of Statistics, Purdue University, West Lafayette, IN, USA. Correspondence to: Mao Ye <ye207@purdue.edu>." |
| Pseudocode | Yes | Algorithm 1, Training Penalized Neural Network; Algorithm 2, Greedy Elimination Method (a hedged sketch of both algorithms follows the table). |
| Open Source Code | No | The paper states that "details on implementation for all experiments are in the supplementary material," but it does not say that source code for the method is released, nor does it provide a link. |
| Open Datasets | Yes | "CCLE is taken from (Liang et al., 2017)" and the other three datasets are from the UCI Machine Learning Repository. |
| Dataset Splits | Yes | "Each dataset consists of 600 observations, with 200 for training, 100 for validation and 300 for testing." |
| Hardware Specification | No | The paper gives no hardware details (e.g., CPU/GPU models, memory, or cloud instance types) for the experiments. |
| Software Dependencies | No | The paper names software components such as the tanh activation function and refers to the GIST and block-wise descent algorithms, but it specifies no version numbers for any software dependency. |
| Experiment Setup | Yes | "The network structure of Spinn and GEPNN are set to have 6 hidden units." For the real datasets, the number of hidden units of Spinn and GEPNN is 3 for CCLE, CCPP and Airfoil, and is reduced to 2 for Boston Housing because nonlinear features are added there. The hyperparameters λ0, α, λ1 and thre(t) are tuned (see the tuning sketch after the table). |
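The two pieces of pseudocode flagged in the table are the core of the method. As a reading aid, here is a minimal NumPy sketch of what they could look like: a one-hidden-layer tanh network trained with a group-lasso penalty on each input variable's outgoing weights (in the spirit of Algorithm 1), followed by greedy elimination of the variable whose drop-out-one validation loss is smallest (in the spirit of Algorithm 2). The plain proximal-gradient loop (the paper itself cites the GIST and block-wise descent algorithms), and every function name and hyperparameter value below, are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(X, W1, b1, w2, b2):
    """One-hidden-layer network with tanh activation (as reported in the paper)."""
    return np.tanh(X @ W1 + b1) @ w2 + b2

def mse(X, y, params):
    return np.mean((forward(X, *params) - y) ** 2)

def train_penalized(X, y, n_hidden=6, lam=0.1, lr=1e-2, n_iter=2000):
    """Algorithm 1 sketch: proximal gradient descent on MSE plus a group-lasso
    penalty, where group j is row j of W1 (all weights leaving input variable j).
    The paper's own optimizers (GIST, block-wise descent) are not reproduced here."""
    n, p = X.shape
    W1 = rng.normal(scale=0.5, size=(p, n_hidden))
    b1 = np.zeros(n_hidden)
    w2 = rng.normal(scale=0.5, size=n_hidden)
    b2 = 0.0
    for _ in range(n_iter):
        H = np.tanh(X @ W1 + b1)               # hidden activations
        r = H @ w2 + b2 - y                    # residuals
        gH = np.outer(r, w2) * (1.0 - H ** 2)  # backprop through tanh
        W1 -= lr * (X.T @ gH) * 2.0 / n
        b1 -= lr * gH.sum(axis=0) * 2.0 / n
        w2 -= lr * (H.T @ r) * 2.0 / n
        b2 -= lr * r.sum() * 2.0 / n
        # proximal step: group soft-thresholding of each input's weight row
        norms = np.linalg.norm(W1, axis=1, keepdims=True)
        W1 *= np.maximum(0.0, 1.0 - lr * lam / np.maximum(norms, 1e-12))
    return W1, b1, w2, b2

def greedy_eliminate(X_val, y_val, params, n_keep):
    """Algorithm 2 sketch: repeatedly zero out the variable whose drop-out-one
    loss (validation loss with its row of W1 set to zero) is smallest,
    until n_keep variables remain."""
    W1, b1, w2, b2 = params
    W1 = W1.copy()
    active = set(range(W1.shape[0]))
    while len(active) > n_keep:
        best_j, best_loss = None, np.inf
        for j in active:
            row = W1[j].copy()
            W1[j] = 0.0                        # drop variable j ...
            loss = mse(X_val, y_val, (W1, b1, w2, b2))
            W1[j] = row                        # ... then restore it
            if loss < best_loss:
                best_j, best_loss = j, loss
        W1[best_j] = 0.0                       # permanently eliminate the cheapest drop
        active.remove(best_j)
    return sorted(active), (W1, b1, w2, b2)

# Toy run mirroring the reported 600-observation, 200/100/300 split;
# only x0 and x1 actually drive the response in this synthetic example.
p = 10
X = rng.normal(size=(600, p))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=600)
X_tr, y_tr, X_va, y_va = X[:200], y[:200], X[200:300], y[200:300]
params = train_penalized(X_tr, y_tr)
selected, _ = greedy_eliminate(X_va, y_va, params, n_keep=2)
print("selected variables:", selected)
```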
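The Experiment Setup row notes that λ0, α, λ1 and thre(t) are all tuned, but the paper excerpt does not spell out the grid. Continuing the sketch above, the following shows only the generic pattern, validation-based selection of a single penalty strength; the grid values and the single-parameter scope are assumptions.

```python
# Continuing the sketch above: pick the penalty strength with the lowest
# validation MSE. The grid is assumed; the paper tunes lambda0, alpha,
# lambda1 and thre(t), not just one penalty weight.
best_lam, best_loss = min(
    ((lam, mse(X_va, y_va, train_penalized(X_tr, y_tr, lam=lam)))
     for lam in (0.01, 0.03, 0.1, 0.3, 1.0)),
    key=lambda t: t[1],
)
print("best lambda:", best_lam, "validation MSE:", best_loss)
```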