ABC: Auxiliary Balanced Classifier for Class-imbalanced Semi-supervised Learning

Authors: Hyuck Lee, Seungjae Shin, Heeyoung Kim

NeurIPS 2021

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Our experimental results under various scenarios demonstrate that the proposed algorithm achieves state-of-the-art performance. Through qualitative analysis and an ablation study, we further investigate the contribution of each component of the proposed algorithm. |
| Researcher Affiliation | Academia | Hyuck Lee, Seungjae Shin, Heeyoung Kim; Department of Industrial and Systems Engineering, KAIST; {dlgur0921, tmdwo0910, heeyoungkim}@kaist.ac.kr |
| Pseudocode | Yes | We present the pseudo code of the proposed algorithm in Appendix B. (A hedged sketch of the algorithm's balanced masking loss is given below the table.) |
| Open Source Code | Yes | The code for the proposed algorithm is available at https://github.com/LeeHyuck/ABC. |
| Open Datasets | Yes | We created class-imbalanced versions of CIFAR-10, CIFAR-100 [21], and SVHN [25] datasets... We also conducted experiments on 7.5M data points of 256 by 256 images from the LSUN dataset [37]. (A sketch of the imbalanced-dataset construction is given below the table.) |
| Dataset Splits | No | The paper mentions 'validation loss plots' in Appendix K but does not explicitly provide training/validation/test split details (percentages or counts) for the datasets used in the experiments. |
| Hardware Specification | Yes | We present the floating point operations per second (FLOPS) for each algorithm using Nvidia Tesla-V100 in Appendix I. |
| Software Dependencies | No | The paper mentions specific optimizers (Adam) and data augmentation techniques (Cutout, Random Augment) but does not provide version numbers for any software libraries or frameworks used (e.g., PyTorch, TensorFlow, scikit-learn). |
| Experiment Setup | Yes | We trained the proposed algorithm for 250,000 iterations with a batch size of 64. The confidence threshold τ was set to 0.95 based on experiments with various values of τ in Appendix D. We used the Adam optimizer [20] with a learning rate of 0.002, and used Cutout [10] and Random Augment [8] for strong data augmentation, following [18]. (A sketch of this training configuration is given below the table.) |
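
The pseudocode itself lives in Appendix B of the paper, but the core mechanism it describes, an auxiliary classifier trained with a Bernoulli-masked cross-entropy whose keep probability is inversely proportional to class frequency, can be sketched compactly. The snippet below is a minimal illustration, not the authors' implementation: the function name, the `class_counts` tensor, and the restriction to labeled data are assumptions made for this sketch.

```python
import torch
import torch.nn.functional as F

def abc_masked_ce_loss(logits, labels, class_counts):
    """Masked cross-entropy for the auxiliary balanced classifier (sketch).

    Each sample is kept with probability N_min / N_y, where N_y is the size
    of its class, so minority-class samples contribute (in expectation) as
    much gradient as majority-class ones. Details such as the handling of
    unlabeled data with pseudo-labels are simplified away here.
    """
    keep_prob = class_counts.min().float() / class_counts[labels].float()
    mask = torch.bernoulli(keep_prob)              # 1 = sample participates
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    return (mask * per_sample).mean()

# Hypothetical usage with the labeled batch of an SSL step:
# class_counts = torch.bincount(all_labeled_targets, minlength=num_classes)
# loss = abc_masked_ce_loss(aux_logits, batch_labels, class_counts)
```

In expectation this masking rebalances the per-class gradient contribution without duplicating minority samples, which is the motivation the paper gives for masking over naive oversampling.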
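
For the class-imbalanced versions of CIFAR-10, CIFAR-100, and SVHN, the usual long-tailed protocol shrinks class k to N_1 * γ^(-(k-1)/(L-1)) samples, where L is the number of classes and γ is the ratio between the largest and smallest class sizes. The helper below is a hedged sketch of that construction, assuming this exponential protocol; the function name, seed handling, and γ = 100 default are illustrative, not taken from the paper's code.

```python
import numpy as np
import torchvision

def make_class_imbalanced_indices(targets, gamma=100, seed=0):
    """Subsample per-class indices so class sizes decay exponentially.

    With L classes and max class size N_1, class k keeps
    N_1 * gamma ** (-(k - 1) / (L - 1)) samples, so the rarest class
    ends up gamma times smaller than the largest one.
    """
    rng = np.random.default_rng(seed)
    targets = np.asarray(targets)
    classes = np.unique(targets)
    n_max = np.bincount(targets).max()
    keep = []
    for k, c in enumerate(classes):
        n_k = int(n_max * gamma ** (-k / (len(classes) - 1)))
        idx = np.flatnonzero(targets == c)
        keep.append(rng.choice(idx, size=n_k, replace=False))
    return np.concatenate(keep)

# Usage: build an imbalanced CIFAR-10 training subset.
train_set = torchvision.datasets.CIFAR10(root="./data", train=True, download=True)
imbalanced_idx = make_class_imbalanced_indices(train_set.targets, gamma=100)
```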
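
The quoted experiment-setup hyperparameters translate directly into a training configuration. The snippet below wires them up in PyTorch as a sketch only: the placeholder backbone and the confidence-masking helper are assumptions (the paper builds on FixMatch/ReMixMatch-style consistency training with a Wide ResNet backbone, and the Cutout/RandAugment strong-augmentation pipeline is omitted here).

```python
import torch

# Hyperparameters quoted in the paper's experiment setup.
NUM_ITERATIONS = 250_000
BATCH_SIZE = 64
TAU = 0.95              # confidence threshold for pseudo-labels
LEARNING_RATE = 0.002

# Placeholder backbone for illustration only; the paper uses a Wide ResNet.
model = torch.nn.Linear(3 * 32 * 32, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)

def confidence_mask(unlabeled_logits, tau=TAU):
    """Keep only unlabeled samples whose max softmax probability exceeds tau."""
    probs = torch.softmax(unlabeled_logits, dim=-1)
    max_prob, pseudo_labels = probs.max(dim=-1)
    return max_prob >= tau, pseudo_labels
```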