ConR: Contrastive Regularizer for Deep Imbalanced Regression
Authors: Mahsa Keramati, Lili Meng, R. David Evans
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our comprehensive experiments show that ConR significantly boosts the performance of all the state-of-the-art methods on four large-scale deep imbalanced regression benchmarks. |
| Researcher Affiliation | Collaboration | Mahsa Keramati (1,2), Lili Meng (1), R. David Evans (1); (1) Borealis AI, (2) School of Computing Science, Simon Fraser University |
| Pseudocode | Yes | Algorithm 1 ConR: Contrastive regularizer for deep imbalanced regression (a simplified sketch appears after this table) |
| Open Source Code | Yes | Our code is publicly available at https://github.com/BorealisAI/ConR. |
| Open Datasets | Yes | We use three datasets curated by Yang et al. (2021) for the deep imbalanced regression problem: AgeDB-DIR is a facial age estimation benchmark created from AgeDB (Moschoglou et al., 2017). IMDB-WIKI-DIR is an age estimation dataset originating from IMDB-WIKI (Rothe et al., 2018). NYUD2-DIR is created from NYU Depth Dataset V2 (Silberman et al., 2012) to predict depth maps from RGB indoor scenes. Moreover, we create MPIIGaze-DIR based on MPIIGaze, an appearance-based gaze estimation benchmark. |
| Dataset Splits | Yes | IMDB-WIKI (Rothe et al., 2018) has 191.5K images for training and 11.0K images each for validation and testing. |
| Hardware Specification | Yes | We use four NVIDIA GeForce GTX 1080 Ti GPUs to train all models. |
| Software Dependencies | No | The paper mentions software components such as ResNet-50, the Adam optimizer, and LeNet, but does not provide version numbers for any of the libraries or frameworks used. |
| Experiment Setup | Yes | The batch size is 64 and the learning rate is 2.5 × 10⁻⁴, decreasing by a factor of 10 at epochs 60 and 80. We use the Adam optimizer with a momentum of 0.9 and a weight decay of 1e-4. Following the baselines (Yang et al., 2021), the regression loss L_R is Mean Absolute Error (MAE). All models are trained for 90 epochs. (See the training-setup sketch after this table.) |
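
To make the "Pseudocode" row concrete, here is a minimal, simplified sketch of a ConR-style regularizer in PyTorch. It is not the authors' Algorithm 1 verbatim (their released code at https://github.com/BorealisAI/ConR is the reference): the pair-selection rule (samples with similar labels form positives; samples with dissimilar labels whose predictions nonetheless agree form negatives) and the label-distance weighting of negatives follow the paper's description, but the function name `conr_style_regularizer` and the hyperparameters `omega`, `temperature`, and `beta` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def conr_style_regularizer(features, labels, preds,
                           omega=1.0, temperature=0.2, beta=1.0):
    """Simplified ConR-style contrastive penalty (illustrative, not Algorithm 1 verbatim).

    features: (N, D) batch embeddings; labels, preds: (N,) targets and predictions.
    """
    z = F.normalize(features, dim=1)                  # unit-norm embeddings
    sim = torch.exp(z @ z.t() / temperature)          # exponentiated pairwise similarities
    label_dist = (labels[:, None] - labels[None, :]).abs()
    pred_dist = (preds[:, None] - preds[None, :]).abs()

    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos = (label_dist < omega) & ~eye                 # similar labels -> positive pair
    neg = (label_dist >= omega) & (pred_dist < omega) # dissimilar labels but similar
                                                      # predictions -> "misplaced" negatives
    # Push harder on negatives whose labels are farther apart.
    push = torch.where(neg, 1.0 + beta * label_dist, torch.zeros_like(label_dist))

    denom = (sim * pos).sum(1) + (sim * push).sum(1) + 1e-8
    log_prob = torch.log(sim / denom[:, None] + 1e-8) # InfoNCE-style log-ratio

    n_pos = pos.sum(1).clamp(min=1)
    per_anchor = -(log_prob * pos).sum(1) / n_pos     # average over each anchor's positives
    has_pos = pos.any(1)
    return per_anchor[has_pos].mean() if has_pos.any() else features.new_zeros(())
```

A training step would combine this term with the regression loss, e.g. `loss = mae + lam * conr_style_regularizer(feats, labels, preds)`, where `lam` is an assumed weighting coefficient.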
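
The "Experiment Setup" row can likewise be read as a training configuration. Below is a hedged reconstruction in PyTorch: only the quoted hyperparameters (batch size 64, Adam with learning rate 2.5 × 10⁻⁴ and weight decay 1e-4, a 10× learning-rate decay at epochs 60 and 80, MAE loss, 90 epochs) come from the paper; the ResNet-50 head configuration, the dummy data, and the loop structure are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet50

# Dummy stand-in data so the sketch runs end to end; the real benchmarks are
# AgeDB-DIR, IMDB-WIKI-DIR, NYUD2-DIR, and MPIIGaze-DIR.
train_loader = DataLoader(
    TensorDataset(torch.randn(128, 3, 224, 224), torch.rand(128)),
    batch_size=64, shuffle=True,  # reported batch size: 64
)

model = resnet50(num_classes=1)  # ResNet-50 backbone with a scalar head (assumed setup)

# Reported: Adam, lr = 2.5e-4, weight decay = 1e-4 (Adam's default beta1 = 0.9
# plays the role of the quoted "momentum of 0.9").
optimizer = torch.optim.Adam(model.parameters(), lr=2.5e-4, weight_decay=1e-4)
# Reported: learning rate decreases by a factor of 10 at epochs 60 and 80.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[60, 80], gamma=0.1)
criterion = nn.L1Loss()  # MAE, the regression loss L_R used by the baselines

for epoch in range(90):  # reported: 90 epochs total
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images).squeeze(-1), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```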