Long-Tailed Learning as Multi-Objective Optimization

Authors: Weiqi Li, Fan Lyu, Fanhua Shang, Liang Wan, Wei Feng

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Moreover, we conduct extensive experiments on commonly used benchmarks in long-tailed learning and demonstrate the superiority of our method over existing SOTA methods."
Researcher Affiliation | Academia | "1 College of Intelligence and Computing, Tianjin University; 2 CRIPAC, MAIS, CASIA"
Pseudocode | Yes | "Algorithm 1: Gradient-balanced grouping" (a schematic illustration of gradient-based grouping follows the table)
Open Source Code | Yes | "Our code is released at https://github.com/WickyLee1998/GBG_v1."
Open Datasets | Yes | "CIFAR10/100-LT. CIFAR10/100-LT are the long-tailed versions of CIFAR10/100. Specifically, they are generated by downsampling CIFAR10/100 with different Imbalance Factors (IF) β = Nmax/Nmin, where Nmax and Nmin are the instance counts of the most frequent and least frequent classes in the training set (Cui et al. 2019; Cao et al. 2019). ImageNet-LT. ImageNet-LT is sampled from vanilla ImageNet following a Pareto distribution with the power value α = 6. It contains 115.8K training images of 1,000 categories with Nmax = 1,280 and Nmin = 5. We use the balanced validation set of vanilla ImageNet, which contains 50 images per class. iNaturalist 2018. iNaturalist 2018 (iNat) is a large-scale real-world dataset that naturally presents a long-tailed distribution. It consists of 437.5K images from 8,142 classes with β = 512. The validation set contains 24.4K images with 3 images per class to test our method." (see the dataset-construction sketch after the table)
Dataset Splits | Yes | "CIFAR10/100-LT are the long-tailed versions of CIFAR10/100. Specifically, they are generated by downsampling CIFAR10/100 with different Imbalance Factors (IF) β = Nmax/Nmin, where Nmax and Nmin are the instance counts of the most frequent and least frequent classes in the training set (Cui et al. 2019; Cao et al. 2019). ImageNet-LT. ... We use the balanced validation set of vanilla ImageNet, which contains 50 images per class. iNaturalist 2018. ... The validation set contains 24.4K images with 3 images per class to test our method."
Hardware Specification | Yes | "We train all the above models on NVIDIA GeForce RTX 3090 GPU."
Software Dependencies | No | The paper mentions using SGD but does not provide specific software dependency versions (e.g., Python, PyTorch, or CUDA versions).
Experiment Setup | Yes | "For CIFAR and ImageNet-LT, weight decay (wd) is 5e-4 and momentum (m) is 0.9. For iNat, wd is 1e-4. We set batch size as 256 for all datasets." (see the optimizer sketch after the table)
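
The "Pseudocode" row names Algorithm 1, "Gradient-balanced grouping," but the listing itself is not reproduced in this report. The sketch below is a schematic illustration only, not the paper's Algorithm 1 (whose details are in the paper and the released repository): it shows one generic way to group classes by the similarity of their per-class gradients, greedily assigning each class to the group whose running mean gradient direction is most cosine-similar. The function name, the seeding rule, and the group count are all hypothetical.

```python
import torch
import torch.nn.functional as F

def group_classes_by_gradient(per_class_grads: torch.Tensor,
                              num_groups: int) -> list[list[int]]:
    """Greedy, illustrative grouping of classes by gradient direction.

    per_class_grads: (C, D) tensor, one flattened gradient per class
    (e.g., the loss gradient restricted to that class's samples).
    """
    C = per_class_grads.size(0)
    grads = F.normalize(per_class_grads, dim=1)   # unit gradient directions
    # Seed each group with one of the first `num_groups` classes (arbitrary choice).
    groups = [[k] for k in range(num_groups)]
    means = [grads[k].clone() for k in range(num_groups)]
    for c in range(num_groups, C):
        # Assign class c to the group with the most similar mean direction.
        sims = torch.stack([m @ grads[c] for m in means])
        k = int(sims.argmax())
        groups[k].append(c)
        # Update that group's running mean direction.
        n = len(groups[k])
        means[k] = F.normalize(means[k] * (n - 1) + grads[c], dim=0)
    return groups

# Toy usage: 100 classes, 512-dim flattened gradients, 4 groups.
toy = torch.randn(100, 512)
print([len(g) for g in group_classes_by_gradient(toy, num_groups=4)])
```

In the paper's framing, each resulting group would supply one objective of the multi-objective problem; consult the released repository for the actual grouping criterion.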
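
The "Open Datasets" row describes how CIFAR10/100-LT are built by downsampling with an imbalance factor β = Nmax/Nmin. Below is a minimal sketch of the resulting per-class counts, assuming the standard exponential profile of Cui et al. (2019), which the quoted text cites; the exact downsampling code lives in the released repository.

```python
def long_tailed_counts(n_max: int, num_classes: int, beta: float) -> list[int]:
    """Per-class sample counts n_i = n_max * beta^(-i / (C - 1)),
    so class 0 keeps n_max images and class C-1 keeps n_max / beta."""
    return [int(n_max * beta ** (-i / (num_classes - 1)))
            for i in range(num_classes)]

# CIFAR10-LT with IF beta = 100: the head class keeps all 5,000
# training images, the tail class keeps 5,000 / 100 = 50.
counts = long_tailed_counts(n_max=5000, num_classes=10, beta=100.0)
print(counts)                   # [5000, 2997, 1796, 1077, 645, 387, 232, 139, 83, 50]
print(counts[0] / counts[-1])   # 100.0, i.e. the imbalance factor
```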
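
The "Experiment Setup" row fixes weight decay, momentum, and batch size. Below is a hedged sketch of the corresponding optimizer configuration, assuming a PyTorch SGD setup (the "Software Dependencies" row notes that no framework versions are given). The learning rate and schedule are not stated in the quoted text, so `lr` is a placeholder assumption, as is the iNat momentum.

```python
import torch

BATCH_SIZE = 256  # "We set batch size as 256 for all datasets."

def make_optimizer(model: torch.nn.Module, dataset: str, lr: float = 0.1):
    """SGD configured per dataset; `lr=0.1` is an assumed placeholder."""
    if dataset in ("cifar10_lt", "cifar100_lt", "imagenet_lt"):
        wd, momentum = 5e-4, 0.9   # CIFAR and ImageNet-LT settings from the paper
    elif dataset == "inat2018":
        wd, momentum = 1e-4, 0.9   # wd per the paper; momentum assumed unchanged
    else:
        raise ValueError(f"unknown dataset: {dataset}")
    return torch.optim.SGD(model.parameters(), lr=lr,
                           momentum=momentum, weight_decay=wd)
```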