Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Curve Your Enthusiasm: Concurvity Regularization in Differentiable Generalized Additive Models

Authors: Julien Siems, Konstantin Ditschuneit, Winfried Ripken, Alma Lindborg, Maximilian Schambach, Johannes Otterbach, Martin Genzel

NeurIPS 2023 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental We validate the effectiveness of our regularizer in experiments on synthetic as well as real-world datasets for time series and tabular data.
Researcher Affiliation Collaboration Julien Siems (University of Freiburg), Konstantin Ditschuneit* (Scenarium AI), Winfried Ripken* (Merantix Momentum), Alma Lindborg* (Merantix Momentum), Maximilian Schambach (Merantix Momentum), Johannes S. Otterbach (nyonic), Martin Genzel (Merantix Momentum)
Pseudocode No The paper does not include any structured pseudocode or algorithm blocks.
Open Source Code Yes Code: https://github.com/merantix-momentum/concurvity-regularization
Open Datasets Yes Boston Housing [23], California Housing [37], Adult [18], MIMIC-II [29], MIMIC-III [27] and Support2 [14].
Dataset Splits Yes We sample 10,000 datapoints from the model and use 7,000, 2,000, and 1,000 for training, validation, and testing, respectively. We use the validation split to find adequate hyperparameters via a small manual search.
Hardware Specification Yes The results are shown in Figure 10, all obtained with an M1 MacBook Pro.
Software Dependencies No The paper mentions several software components and libraries, such as the "AdamW optimizer", "Cosine Annealing", "pyGAM", "Optuna", "TensorFlow", "JAX", and "PyTorch", but it does not specify their version numbers.
Experiment Setup Yes The hyperparameter space and default parameters are shown in Table 1, and the hyperparameters per dataset are shown in Figure 7. (Table 1 includes Learning Rate [1e-4, 1e-1], Weight Decay [1e-6, 1], Activation [ELU, GELU, ReLU], # of neurons per layer [2, 256], # of hidden layers [1, 6], Num. Epochs [10, 500])
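The 7,000 / 2,000 / 1,000 split quoted under "Dataset Splits" can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the shuffling step, and the fixed seed are our assumptions.

```python
import random

# Hedged sketch of the quoted split: 10,000 sampled datapoints divided into
# 7,000 / 2,000 / 1,000 index sets for training, validation, and testing.
# Shuffling with a fixed seed is an assumption added for reproducibility.
def split_indices(n=10_000, n_train=7_000, n_val=2_000, n_test=1_000, seed=0):
    assert n_train + n_val + n_test == n, "split sizes must cover the dataset"
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # deterministic shuffle before slicing
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return train, val, test

train_idx, val_idx, test_idx = split_indices()
```

The validation indices would then back the "small manual search" over hyperparameters that the quote describes.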
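The Table 1 search space quoted above can be written out as a plain configuration, for example as input to an Optuna study (the paper mentions Optuna, but this encoding, including the key names and the log-uniform treatment of the learning-rate and weight-decay ranges, is our assumption):

```python
# Hedged sketch of the Table 1 hyperparameter space as a Python dict.
# Key names are illustrative; tuples are (low, high) ranges, lists are
# categorical choices. Which ranges are sampled log-uniformly is an assumption.
search_space = {
    "learning_rate": (1e-4, 1e-1),            # assumed log-uniform
    "weight_decay": (1e-6, 1.0),              # assumed log-uniform
    "activation": ["ELU", "GELU", "ReLU"],    # categorical
    "neurons_per_layer": (2, 256),            # integer range
    "hidden_layers": (1, 6),                  # integer range
    "num_epochs": (10, 500),                  # integer range
}

# Basic sanity check: every range is ordered low <= high.
for key, spec in search_space.items():
    if isinstance(spec, tuple):
        assert spec[0] <= spec[1], f"inverted range for {key}"
```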