Calibrated Reliable Regression using Maximum Mean Discrepancy

Authors: Peng Cui, Wenbo Hu, Jun Zhu

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on non-trivial real datasets show that our method can produce well-calibrated and sharp prediction intervals, which outperforms the related state-of-the-art methods."
Researcher Affiliation | Collaboration | Peng Cui (1,2), Wenbo Hu (1,2), Jun Zhu (1); (1) Dept. of Comp. Sci. & Tech., Institute for AI, BNRist Center, Tsinghua-Bosch Joint ML Center, THBI Lab, Tsinghua University, Beijing, 100084, China; (2) Real AI
Pseudocode | Yes | "Algorithm 1: Deep calibrated reliable regression model."
Open Source Code | No | The paper contains no statement about releasing source code for the described methodology and provides no link to a code repository.
Open Datasets | Yes | "We use several public datasets from UCI repository [2] and Kaggle [1]: 1) for the time-series task: Pickups, Bike-sharing, PM2.5, Metro-traffic and Quality; 2) for the regression task: Power Plant, Protein Structure, Naval Propulsion and wine."
Dataset Splits | No | The paper specifies training and testing splits (e.g., "70% training data and 30% test data" or "80% of each dataset for training and the rest for testing") but does not mention a separate validation split or how it was handled.
Hardware Specification | Yes | "on the wine dataset on GTX1080Ti."
Software Dependencies | No | The paper does not specify any software dependencies (e.g., libraries or frameworks) with version numbers required to replicate the experiments.
Experiment Setup | Yes | "For time-series forecasting tasks, we construct an LSTM model with two hidden layers (128 hidden units and 64 units respectively) and a linear layer for making the final predictions. ... The details of hyperparameters setting can be found in Appendix B.2."
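The paper's title names Maximum Mean Discrepancy (MMD) as the calibration objective, but the report above does not quote the estimator itself. For context, the following is a generic biased RBF-kernel MMD² estimator in NumPy, a standard formulation rather than the authors' implementation; the bandwidth and sample sizes are placeholders.

```python
import numpy as np

def rbf_kernel(a, b, bandwidth=1.0):
    # Pairwise RBF kernel matrix: k(a_i, b_j) = exp(-(a_i - b_j)^2 / (2 h^2))
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2_biased(x, y, bandwidth=1.0):
    # Biased estimator of squared MMD between two 1-D samples x and y:
    # mean k(x, x) + mean k(y, y) - 2 * mean k(x, y)
    kxx = rbf_kernel(x, x, bandwidth).mean()
    kyy = rbf_kernel(y, y, bandwidth).mean()
    kxy = rbf_kernel(x, y, bandwidth).mean()
    return kxx + kyy - 2.0 * kxy

rng = np.random.default_rng(0)
# Samples from the same distribution yield a near-zero MMD^2;
# samples from shifted distributions yield a clearly larger value.
same = mmd2_biased(rng.normal(size=500), rng.normal(size=500))
diff = mmd2_biased(rng.normal(size=500), rng.normal(loc=3.0, size=500))
assert same < diff
```

Minimizing such a discrepancy between a model's predictive distribution and the empirical one is the general idea behind MMD-based calibration; the exact distributions compared in Algorithm 1 are described in the paper, not here.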
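The quoted experiment setup (an LSTM with two hidden layers of 128 and 64 units, followed by a linear layer for the final prediction) can be sketched in PyTorch as below. This is an illustration of the described architecture only, not the authors' code; the input feature size, window length, and batch size are assumptions.

```python
import torch
import torch.nn as nn

class Forecaster(nn.Module):
    # Two stacked LSTM layers (128 and 64 hidden units) plus a linear
    # output head, mirroring the setup quoted in the review.
    def __init__(self, n_features=1):
        super().__init__()
        self.lstm1 = nn.LSTM(n_features, 128, batch_first=True)
        self.lstm2 = nn.LSTM(128, 64, batch_first=True)
        self.head = nn.Linear(64, 1)

    def forward(self, x):           # x: (batch, seq_len, n_features)
        h, _ = self.lstm1(x)
        h, _ = self.lstm2(h)
        return self.head(h[:, -1])  # predict from the last time step

model = Forecaster()
y = model(torch.randn(8, 24, 1))    # e.g. 24-step input windows
print(y.shape)                      # one scalar prediction per sequence
```

Training details (loss weighting, optimizer, and the calibration term) are deferred to the paper's Appendix B.2 and are not reproduced here.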