Inverse Weight-Balancing for Deep Long-Tailed Learning

Authors: Wenqi Dang, Zhou Yang, Weisheng Dong, Xin Li, Guangming Shi

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show that our method can greatly improve performance on imbalanced datasets such as CIFAR100-LT with different imbalance factors, ImageNet-LT, and iNaturalist 2018.
Researcher Affiliation | Academia | (1) Xidian University, China; (2) West Virginia University, America; (3) Peng Cheng Laboratory, China
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide a statement or link indicating the availability of open-source code for the described methodology.
Open Datasets | Yes | Datasets: We have carried out a series of experiments in CIFAR100-LT (Krizhevsky, Hinton et al. 2009), ImageNet-LT (Liu et al. 2018), iNaturalist 2018 (Van Horn et al. 2018).
Dataset Splits | No | The paper mentions that 'the test or valid set is balanced' but does not provide specific percentages or sample counts for a distinct validation split, nor does it refer to a standard validation split.
Hardware Specification | Yes | CIFAR100-LT dataset requires only one GeForce RTX 2080 card, and the other two datasets require 8 GeForce RTX 2080 cards due to batch size and image size.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers, such as programming language versions or library versions.
Experiment Setup | Yes | First, in the first stage, for CIFAR100-LT, the batch size is 64, the learning rate is 0.01, and the weight decay is 5e-3. For ImageNet-LT, the batch size is 128, the learning rate is 0.01, and the weight decay is 5e-4. For iNaturalist 2018, the batch size is 512, the learning rate is 0.02, and the weight decay is 1e-4. Each of the three datasets trains 200 epochs, and the learning rate uses the cosine decay to 0. Next is the second stage. For CIFAR100-LT, the batch size is 64, the learning rate is 0.005 and the hyperparameter λ=0.15. For ImageNet-LT, the batch size is 512, the learning rate is 0.01, and the hyperparameter λ=0.05. For iNaturalist 2018, the batch size is 512, the learning rate is 0.0002, and the hyperparameter λ=0.01. Each of the three datasets trains only 10 epochs in the second stage, and the learning rate uses the cosine decay to 0.
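
For reference, the two-stage schedule quoted in the Experiment Setup row can be summarized as a minimal PyTorch-style sketch for the CIFAR100-LT case. This is not the authors' code (none is released, per the Open Source Code row): the optimizer choice (SGD with momentum 0.9), the stand-in model, and the stage-2 weight decay are assumptions, while the learning rates, weight decay, epoch counts, and cosine decay of the learning rate to 0 come from the quote; how the second-stage hyperparameter λ enters the loss is defined in the paper and not reproduced here.

```python
# Hypothetical sketch (not the authors' code) of the quoted two-stage schedule
# for CIFAR100-LT. Assumed: SGD with momentum 0.9, a toy stand-in model, and
# the stage-2 weight decay; the quoted values are the learning rates, the
# stage-1 weight decay, the epoch counts, and cosine decay of the LR to 0.
from torch import nn, optim

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 100))  # stand-in for the real backbone

def train_one_epoch(optimizer):
    """Placeholder for one pass over the long-tailed training set (batch size 64)."""
    pass

def run_stage(lr, epochs, weight_decay=0.0):
    opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9, weight_decay=weight_decay)
    # "the learning rate uses the cosine decay to 0"
    sched = optim.lr_scheduler.CosineAnnealingLR(opt, T_max=epochs, eta_min=0.0)
    for _ in range(epochs):
        train_one_epoch(opt)
        sched.step()

# Stage 1: lr 0.01, weight decay 5e-3, 200 epochs.
run_stage(lr=0.01, epochs=200, weight_decay=5e-3)

# Stage 2: lr 0.005, 10 epochs, hyperparameter lambda = 0.15 (its role in the
# loss is defined in the paper, not here). The stage-2 weight decay is not
# given in the quote, so the default of 0.0 is a placeholder.
run_stage(lr=0.005, epochs=10)
```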