Regularization Learning Networks: Deep Learning for Tabular Datasets

Authors: Ira Shavitt, Eran Segal

NeurIPS 2018

Reproducibility assessment: each variable below is listed with its result and the LLM response that supports it.

Research Type: Experimental
LLM Response: "Our results show that RLNs significantly improve DNNs on tabular datasets, and achieve comparable results to GBTs, with the best performance achieved with an ensemble that combines GBTs and RLNs. RLNs produce extremely sparse networks, eliminating up to 99.8% of the network edges and 82% of the input features, thus providing more interpretable models and revealing the importance that the network assigns to different inputs."

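The sparsity figures above come from RLN assigning a separate regularization coefficient to every network edge. As a rough illustration only, a minimal PyTorch sketch of a layer carrying one learnable L1 coefficient per weight might look like the following. The class and parameter names are hypothetical, and this does not implement the paper's actual update rule (its Counterfactual Loss, described in the supplementary material):

```python
import torch
import torch.nn as nn

class PerWeightRegularizedLinear(nn.Module):
    """Illustrative sketch: a linear layer with a separate L1
    coefficient for every edge, in the spirit of RLN's per-weight
    penalties. The paper's Counterfactual Loss update for these
    coefficients is NOT implemented here."""

    def __init__(self, in_features, out_features, init_log_lambda=-4.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # One log-scale coefficient per weight; exp() keeps it positive.
        self.log_lambda = nn.Parameter(
            torch.full((out_features, in_features), init_log_lambda))

    def forward(self, x):
        return self.linear(x)

    def penalty(self):
        # L1 penalty with its own coefficient for each edge; a large
        # coefficient drives the corresponding weight toward zero,
        # which is what yields the sparse networks reported above.
        return (self.log_lambda.exp() * self.linear.weight.abs()).sum()
```

During training, the task loss would be summed with `penalty()`; in the paper the coefficients themselves are tuned by a dedicated update rule rather than by plain gradient descent on this sum.
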
Researcher Affiliation: Academia
LLM Response: "Ira Shavitt, Weizmann Institute of Science, irashavitt@gmail.com; Eran Segal, Weizmann Institute of Science, eran.segal@weizmann.ac.il"

Pseudocode: No
LLM Response: "The full algorithm is described in the supplementary material."

Open Source Code: Yes
LLM Response: "An open source implementation of RLN can be found at https://github.com/irashavitt/regularization_learning_networks."

Open Datasets: Yes
LLM Response: "We display the microbiome dataset, with the covariates marked, in comparison to the MNIST dataset [20]."
Reference [20]: Yann LeCun. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/

Dataset Splits: No
LLM Response: "Hyperparameters of the network, like $\lambda$, are usually obtained using cross-validation, which is the application of derivative-free optimization on $L_{CV}(Z_t, Z_v, \lambda)$ with respect to $\lambda$, where $L_{CV}(Z_t, Z_v, \lambda) = L\big(Z_v, \arg\min_W L(Z_t, W, \lambda)\big)$ and $(Z_t, Z_v)$ is some partition of $Z$ into train and validation sets, respectively. The full list of hyperparameters, the setting of the training of the models and the ensembles, as well as the description of all the input features and the measured traits, can be found in the supplementary material."

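As a minimal sketch of the $L_{CV}$ definition quoted above, the following uses scikit-learn's Ridge regression as a stand-in for the network, with synthetic data and a simple grid search standing in for derivative-free optimization; none of this reflects the paper's actual models or splits:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; (Z_t, Z_v) is a train/validation partition of Z.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 10)), rng.normal(size=200)
X_t, X_v, y_t, y_v = train_test_split(X, y, test_size=0.25, random_state=0)

def l_cv(lam):
    # L_CV(Z_t, Z_v, lam) = L(Z_v, argmin_W L(Z_t, W, lam)):
    # fit on the train split with coefficient lam, score on validation.
    model = Ridge(alpha=lam).fit(X_t, y_t)
    return mean_squared_error(y_v, model.predict(X_v))

# Derivative-free optimization over lambda, here a plain grid search.
best_lam = min([0.01, 0.1, 1.0, 10.0], key=l_cv)
```
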
Hardware Specification: No
LLM Response: "The paper does not provide specific hardware details, such as the GPU or CPU models used for its experiments."

Software Dependencies: No
LLM Response: "The paper does not provide specific software dependencies with version numbers."

Experiment Setup: No
LLM Response: "The full list of hyperparameters, the setting of the training of the models and the ensembles, as well as the description of all the input features and the measured traits, can be found in the supplementary material."