Variational Imbalanced Regression: Fair Uncertainty Quantification via Probabilistic Smoothing

Authors: Ziyan Wang, Hao Wang

Venue: NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on several real-world datasets show that our VIR can outperform state-of-the-art imbalanced regression models in terms of both accuracy and uncertainty estimation.
Researcher Affiliation | Academia | Ziyan Wang (Georgia Institute of Technology, wzy@gatech.edu); Hao Wang (Rutgers University, hw488@cs.rutgers.edu)
Pseudocode | No | The paper describes its methods in prose and mathematical equations but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | Code will soon be available at https://github.com/Wang-ML-Lab/variational-imbalanced-regression.
Open Datasets | Yes | We evaluate our methods in terms of prediction accuracy and uncertainty estimation on four imbalanced datasets: AgeDB-DIR [30], IMDB-WIKI-DIR [33], STS-B-DIR [7], and NYUD2-DIR [35].
Dataset Splits | Yes | AgeDB-DIR: We use AgeDB-DIR as constructed in DIR [49], which contains 12.2K images for training and 2.1K images for validation and testing.
Hardware Specification | No | The paper acknowledges 'Amazon Web Service for providing cloud computing credit' but gives no specific hardware details such as the GPU or CPU models used for the experiments.
Software Dependencies | No | The paper states that PyTorch is used to implement the method but lists no library versions or other dependencies.
Experiment Setup | Yes | We use the Adam optimizer [24] to train all models for 100 epochs, with the same learning rate, decayed by 0.1 at the 60th and 90th epochs. To determine the optimal batch size for training, we try different batch sizes and corroborate the conclusion from [49], i.e., the optimal batch size is 256 when other hyperparameters are fixed. (A sketch of this schedule follows the table.)
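
The recipe in the Experiment Setup row maps directly onto standard PyTorch components. Below is a minimal sketch of that schedule, assuming a placeholder linear model, synthetic data, and a base learning rate of 1e-3 (the excerpt does not state the base rate). It illustrates the reported recipe (Adam, 100 epochs, batch size 256, learning rate decayed by 0.1 at epochs 60 and 90); it is not the authors' released code.

import torch
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in data; the paper trains on imbalanced
# regression datasets such as AgeDB-DIR.
features = torch.randn(1024, 512)
targets = torch.randn(1024)
loader = DataLoader(TensorDataset(features, targets),
                    batch_size=256, shuffle=True)  # 256 per [49]

model = torch.nn.Linear(512, 1)  # placeholder for the actual VIR model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # base LR assumed
# Decay the learning rate by a factor of 0.1 at the 60th and 90th epochs.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[60, 90], gamma=0.1)

for epoch in range(100):
    for x, y in loader:
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x).squeeze(-1), y)
        loss.backward()
        optimizer.step()
    scheduler.step()  # advance the LR schedule once per epoch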