Feature Importance Ranking for Deep Learning

Authors: Maksymilian Wojtas, Ke Chen

NeurIPS 2020

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "A thorough evaluation on synthetic, benchmark and real data sets suggests that our approach outperforms several state-of-the-art feature importance ranking and supervised feature selection methods." |
| Researcher Affiliation | Academia | Maksymilian A. Wojtas, Ke Chen — Department of Computer Science, The University of Manchester, Manchester M13 9PL, U.K. {maksymilian.wojtas,ke.chen}@manchester.ac.uk |
| Pseudocode | Yes | "while the pseudo code can be found from Sect. D in supplementary materials." |
| Open Source Code | Yes | "Our source code is available: https://github.com/maksym33/FeatureImportanceDL" |
| Open Datasets | Yes | "Our first evaluation employs 3 synthetic datasets in literature [17, 11] for feature selection regarding regression and binary/multiclass classification... MNIST Dataset [21]... glass [22], vowel [22], TOX-171 [23] and yale [24]... GM12878 cell line (200 dp)..." |
| Dataset Splits | Yes | "we always use 5-fold cross-validation for evaluation and report the performance statistics, i.e., mean and standard deviation estimated on 5 folds." |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running experiments. |
| Software Dependencies | No | The paper mentions the types of models used (e.g., MLP, CNN, kernel SVMs) but does not name specific software libraries with version numbers (e.g., PyTorch 1.x, TensorFlow 2.x, scikit-learn 0.x). |
| Experiment Setup | No | The paper states "the details of all the experimental settings can be found from Sect. A in Supplementary Materials," but the main text itself does not include specific hyperparameters or system-level training settings. |
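The evaluation protocol quoted under "Dataset Splits" — 5-fold cross-validation with mean and standard deviation reported across folds — can be sketched as follows. This is a minimal, model-agnostic illustration of that protocol, not the authors' implementation; the `fit` and `score` callables stand in for whatever model and metric a given benchmark uses.

```python
import random
import statistics


def five_fold_cv(xs, ys, fit, score, seed=0):
    """Run 5-fold cross-validation and return (mean, std) of the
    per-fold scores, mirroring the statistics the paper reports.

    `fit(train_x, train_y)` returns a fitted model;
    `score(model, test_x, test_y)` returns a scalar score.
    """
    # Shuffle indices once, then deal them into 5 disjoint folds.
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[k::5] for k in range(5)]

    scores = []
    for k in range(5):
        test = set(folds[k])
        train = [i for i in idx if i not in test]
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        scores.append(score(model,
                            [xs[i] for i in folds[k]],
                            [ys[i] for i in folds[k]]))
    return statistics.mean(scores), statistics.stdev(scores)
```

For example, plugging in a classifier's accuracy as `score` yields the "mean ± std over 5 folds" numbers the table refers to.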