Efficient learning of nonlinear prediction models with time-series privileged information

Authors: Bastian Jung, Fredrik D. Johansson

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | A suite of empirical results confirms the theoretical findings and shows the potential of using privileged time-series information in nonlinear prediction.
Researcher Affiliation | Academia | Bastian Jung, Chalmers University of Technology, mail@bastianjung.com; Fredrik D. Johansson, Chalmers University of Technology, fredrik.johansson@chalmers.se
Pseudocode | Yes | Algorithm 1: Generalized LuPTS (an illustrative sketch of the underlying idea follows the table).
Open Source Code | Yes | Code to reproduce all results is available at https://github.com/Healthy-AI/glupts.
Open Datasets | Yes | The Metro Interstate Traffic Volume data set (Traffic) (Hogue, 2012) contains hourly records of the traffic volume on Interstate 94 between Minneapolis and St. Paul, MN. ... The anonymized data were obtained through the Alzheimer's Disease Neuroimaging Initiative (ADNI) (ADNI, 2004).
Dataset Splits | Yes | In each repetition, a given model performs hyperparameter tuning on the training data using random search and five-fold cross-validation before being re-trained on all training data. (An illustrative tuning sketch follows the table.)
Hardware Specification | Yes | All experiments required less than 3000 GPU-h to complete using NVIDIA Tesla T4 GPUs.
Software Dependencies | No | The paper mentions software such as PyTorch and scikit-learn and cites them, but does not specify the version numbers used in the experiments.
Experiment Setup | Yes | In each repetition, a given model performs hyperparameter tuning on the training data using random search and five-fold cross-validation before being re-trained on all training data. ... For tabular data, their encoder is a multi-layer perceptron with three hidden layers of 25 neurons each. For the image data they use LeNet-5 (LeCun et al., 1989). ... The results presented were found to be robust to small changes in training parameters such as learning rate. (An illustrative encoder sketch follows the table.)
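
The pseudocode itself is not reproduced on this page. As a rough, non-authoritative illustration of the stage-wise idea behind LuPTS-style estimators (learning with privileged time series), the Python sketch below fits one least-squares map per time step and composes them into a single baseline-to-target predictor. The function name, the ridge penalty, and the synthetic data are assumptions; this is not the paper's Algorithm 1, which generalizes the approach to nonlinear representations.

```python
# Hypothetical sketch of a LuPTS-style estimator: fit one least-squares map per
# time step (baseline -> Z_1 -> ... -> target) and compose them into a single
# predictor of the target from baseline features. This is an assumed
# illustration of the general idea, not the paper's Algorithm 1.
import numpy as np


def fit_stagewise(X, Z_list, Y, reg=1e-3):
    """Fit per-step ridge regressions and return the composed coefficient matrix.

    X: (n, d) baseline features; Z_list: list of (n, d_t) privileged time steps;
    Y: (n, k) targets. `reg` is an assumed ridge penalty added for stability.
    """
    stages = [X] + list(Z_list) + [Y]
    coefs = []
    for A, B in zip(stages[:-1], stages[1:]):
        # Closed-form ridge solution for B ~ A @ W.
        W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ B)
        coefs.append(W)
    # Composing the stage-wise maps yields a direct baseline -> target predictor.
    composed = coefs[0]
    for W in coefs[1:]:
        composed = composed @ W
    return composed


# Usage on synthetic data: privileged steps are only needed at training time.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
Z1 = X @ rng.normal(size=(5, 5)) + 0.1 * rng.normal(size=(200, 5))
Y = Z1 @ rng.normal(size=(5, 1)) + 0.1 * rng.normal(size=(200, 1))
beta = fit_stagewise(X, [Z1], Y)
pred = X @ beta  # at test time only the baseline features X are used
```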
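
The split and tuning protocol quoted in the Dataset Splits and Experiment Setup rows (random search with five-fold cross-validation on the training data, followed by re-training on all training data) maps directly onto standard scikit-learn tooling. The sketch below is a minimal illustration; the estimator, search space, and iteration budget are assumptions, not the paper's configuration.

```python
# Illustrative tuning loop mirroring the quoted protocol: random search with
# five-fold cross-validation on the training split, then refitting the best
# configuration on all training data. Estimator and search space are assumed.
from scipy.stats import loguniform
from sklearn.datasets import make_regression
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import RandomizedSearchCV, train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=0.5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

search = RandomizedSearchCV(
    KernelRidge(kernel="rbf"),
    param_distributions={"alpha": loguniform(1e-4, 1e1),
                         "gamma": loguniform(1e-3, 1e1)},
    n_iter=20,   # assumed search budget; not stated in the quoted text
    cv=5,        # five-fold cross-validation, as quoted
    random_state=0,
)
search.fit(X_train, y_train)  # refit=True re-trains the best model on all training data
print(search.best_params_, search.score(X_test, y_test))
```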
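
For the tabular encoder described in the Experiment Setup row (a multi-layer perceptron with three hidden layers of 25 neurons each), a minimal PyTorch sketch is given below; the input and representation dimensions and the ReLU activations are assumptions, as the quoted text does not specify them.

```python
# Minimal sketch of a tabular encoder with three hidden layers of 25 neurons,
# following the quoted experiment setup. Input/representation sizes and the
# choice of ReLU activations are assumptions for illustration only.
import torch
import torch.nn as nn


class TabularEncoder(nn.Module):
    def __init__(self, in_dim: int, rep_dim: int, hidden: int = 25):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, rep_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


encoder = TabularEncoder(in_dim=10, rep_dim=8)
print(encoder(torch.randn(4, 10)).shape)  # torch.Size([4, 8])
```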