A Functional Extension of Semi-Structured Networks

Authors: David Rügamer, Bernard Liew, Zainab Altai, Almond Stöcker

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our numerical experiments demonstrate that this approach accurately recovers underlying signals, enhances predictive performance, and performs favorably compared to competing methods.
Researcher Affiliation | Academia | David Rügamer: Department of Statistics, LMU Munich; Munich Center for Machine Learning (MCML), Munich, Germany (david@stat.uni-muenchen.de). Bernard X.W. Liew, Zainab Altai: School of Sport, Rehabilitation and Exercise Sciences, University of Essex, Colchester, UK ([bl19622,z.altai]@essex.ac.uk). Almond Stöcker: Institute of Mathematics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland (almond.stoecker@epfl.ch).
Pseudocode | No | No pseudocode or algorithm blocks are present in the paper.
Open Source Code | Yes | A prototypical implementation is available as an add-on package of deepregression [51] at https://github.com/neural-structured-additive-learning/funnel.
Open Datasets | Yes | The data analyzed in the first experiment is a collection of three publicly available running datasets [14, 27, 28].
Dataset Splits | No | No explicit validation dataset split is mentioned; only train/test splits are provided.
Hardware Specification | Yes | All computations were performed on a user PC with Intel(R) Core(TM) i7-8665U CPU @ 1.90GHz, 8 cores, 16 GB RAM using Python 3.8, R 4.2.1, and TensorFlow 2.10.0.
Software Dependencies | Yes | All computations were performed on a user PC with Intel(R) Core(TM) i7-8665U CPU @ 1.90GHz, 8 cores, 16 GB RAM using Python 3.8, R 4.2.1, and TensorFlow 2.10.0.
Experiment Setup | Yes | In all experiments, we use the Adam optimizer with default hyperparameters. No additional learning rate schedule was used. The batch size, maximum number of epochs, and early stopping patience were adjusted depending on the size of the dataset.
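
For context, the training configuration quoted in the Experiment Setup row corresponds to a standard Keras setup. Below is a minimal sketch of such a configuration; the model architecture, the synthetic data, and the concrete batch size, epoch budget, and patience values are all placeholder assumptions, since the paper's actual experiments use semi-structured models built with the R-based deepregression/funnel packages.

```python
import numpy as np
import tensorflow as tf

# Synthetic placeholder data; the paper's experiments instead use real
# running datasets and semi-structured networks (deepregression/funnel).
rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 10)).astype("float32")
y = (x @ rng.normal(size=(10, 1)).astype("float32")
     + 0.1 * rng.normal(size=(1000, 1)).astype("float32"))

# Placeholder regression model standing in for the paper's architecture.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Adam with default hyperparameters and no learning-rate schedule,
# as stated in the paper.
model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse")

# Batch size, maximum epochs, and early-stopping patience were adjusted
# per dataset in the paper; the values below are illustrative only.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=10, restore_best_weights=True
)

model.fit(
    x, y,
    validation_split=0.2,
    batch_size=32,
    epochs=200,
    callbacks=[early_stop],
    verbose=0,
)
```

Note that the `validation_split` used here for early stopping is itself an assumption; as the Dataset Splits row indicates, the paper reports only train/test splits without an explicit validation split.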