RankSim: Ranking Similarity Regularization for Deep Imbalanced Regression

Authors: Yu Gong, Greg Mori, Fred Tung

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We performed extensive experimental validation of RankSim on three public benchmarks for deep imbalanced regression. We first describe the benchmarks, metrics, and baselines. We then present experimental results on the three benchmarks, including comparisons with state-of-the-art approaches. Finally, we present ablation studies and analysis to better understand the impact of our design choices.
Researcher Affiliation | Collaboration | Simon Fraser University; Borealis AI.
Pseudocode | No | The paper provides mathematical formulations and descriptions of operations, such as rk(a), but does not present them within a clearly labeled pseudocode or algorithm block.
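The rank operation rk(a) mentioned above maps a vector to the ranks of its entries. A minimal NumPy sketch, assuming the common descending-rank convention rk(a)[i] = 1 + |{j : a[j] > a[i]}| (illustrative only, not the authors' implementation); the usage example shows the general idea of penalizing disagreement between label-space and feature-space similarity rankings:

```python
import numpy as np

def rk(a):
    """Rank operator: rk(a)[i] = 1 + number of entries strictly greater
    than a[i] (descending ranks; tied entries share a rank)."""
    a = np.asarray(a, dtype=float)
    return 1 + (a[None, :] > a[:, None]).sum(axis=1)

# Hypothetical usage: similarity scores of one anchor sample to the rest
# of a batch, in label space and in feature space. A ranking-similarity
# penalty compares the two rank vectors (names here are illustrative).
label_sims = np.array([0.9, 0.1, 0.5])
feat_sims = np.array([0.8, 0.3, 0.2])
penalty = np.mean((rk(label_sims) - rk(feat_sims)) ** 2)
```

Because rk(a) is piecewise constant, the actual method requires a differentiable relaxation to train with it; this sketch only shows the forward computation.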
Open Source Code | Yes | Code and pretrained weights are available at https://github.com/BorealisAI/ranksim-imbalanced-regression.
Open Datasets | Yes | We consider three datasets recently introduced by Yang et al. (2021). IMDB-WIKI-DIR is an age estimation dataset derived from IMDB-WIKI (Rothe et al., 2016), which consists of face images with age annotations.
Dataset Splits | Yes | IMDB-WIKI-DIR is an age estimation dataset derived from IMDB-WIKI (Rothe et al., 2016), which consists of face images with age annotations. It has 191,509 training samples, 11,022 validation samples and 11,022 test samples.
Hardware Specification | Yes | We measured the training time for AgeDB-DIR on four NVIDIA GeForce GTX 1080 Ti GPUs.
Software Dependencies | No | The paper mentions software components such as 'ResNet-50', 'BiLSTM', 'GloVe word embeddings', and 'Adam optimizer', but it does not specify their version numbers for reproducibility.
Experiment Setup | Yes | For all experiments, we use ResNet-50 as the backbone network. We use the Adam optimizer with 0.9 momentum and 1e-4 weight decay. The learning rate is 1e-3 and the batch size is 256. We train all methods for 90 epochs, with the learning rate scheduled to drop by a factor of 10 at epoch 60 and epoch 80. The input image size is 224 by 224. For RankSim hyperparameters, we set the balancing weight γ to 100 and the interpolation strength λ to 2 for all experiments on IMDB-WIKI-DIR.
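The quoted setup can be collected into a small configuration sketch. The names below are illustrative (not taken from the authors' code); the step-schedule function reproduces the stated "drop by a factor of 10 at epochs 60 and 80" rule in plain Python:

```python
def lr_at_epoch(epoch, base_lr=1e-3, milestones=(60, 80), gamma=0.1):
    """Step schedule: multiply the learning rate by `gamma` once the
    epoch reaches each milestone (matches the reported 90-epoch run)."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# Hyperparameters as reported for IMDB-WIKI-DIR (dict keys are illustrative).
config = {
    "backbone": "resnet50",
    "optimizer": "adam",       # 0.9 momentum (Adam beta1), per the paper
    "weight_decay": 1e-4,
    "base_lr": 1e-3,
    "batch_size": 256,
    "epochs": 90,
    "image_size": 224,
    "ranksim_gamma": 100,      # balancing weight γ
    "ranksim_lambda": 2,       # interpolation strength λ
}
```

This makes the schedule easy to check: epochs 0-59 train at 1e-3, epochs 60-79 at 1e-4, and epochs 80-89 at 1e-5.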