DUEL: Duplicate Elimination on Active Memory for Self-Supervised Class-Imbalanced Learning

Authors: Won-Seok Choi, Hyundo Lee, Dong-Sig Han, Junseok Park, Heeyeon Koo, Byoung-Tak Zhang

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate the effectiveness of the DUEL framework in class-imbalanced environments, demonstrating its robustness and providing reliable results in downstream tasks. We also analyze the role of the DUEL policy in the training process through various metrics and visualizations. Experiments: The goal of our framework is to learn a robust representation given unrefined and instantaneous data sampled from a class-imbalanced distribution. Thus, we validate our framework in class-imbalanced environments. Table 1: Linear probing accuracies with various settings.
Researcher Affiliation | Academia | Won-Seok Choi¹, Hyundo Lee¹, Dong-Sig Han¹, Junseok Park¹, Heeyeon Koo², Byoung-Tak Zhang¹,³,* (¹Seoul National University, ²Yonsei University, ³AI Institute of Seoul National University, AIIS)
Pseudocode | Yes | Algorithm 1: DUEL Framework with the policy π_DUEL. Model: feature extractor f_θ, memory M. Input: empirical data distribution D, batch size B, memory size K, learning rate η. Output: trained feature extractor f_θ. (A hedged training-loop sketch based on these inputs is given after the table.)
Open Source Code | No | No explicit statement about providing open-source code or a link to a code repository was found in the paper.
Open Datasets | Yes | We utilize CIFAR-10 (Krizhevsky, Hinton et al. 2009) and STL-10 (Coates, Ng, and Lee 2011) for experiments. We also use ImageNet-LT (Liu et al. 2019), which has a long-tailed class distribution, to validate our framework in a more realistic environment. (A hedged sketch of a common long-tailed subsampling scheme follows the table.)
Dataset Splits | No | The paper mentions evaluating models with 'linear probing with class-balanced datasets' after training, but it does not explicitly specify a distinct validation split or its size/proportion, nor does it state train/validation/test splits for its own experiments. (See the linear-probing sketch after the table.)
Hardware Specification | No | No specific hardware details (e.g., GPU models, CPU types, or cloud instance specifications) used for running the experiments were provided in the paper.
Software Dependencies | No | No specific version numbers for software dependencies (e.g., Python, PyTorch, TensorFlow) were mentioned in the paper.
Experiment Setup | No | Hyperparameters for all models are unified for fair comparison. More details for hyperparameters and the experiments are provided in Appendix E. Because those details appear only in Appendix E and not in the main text, this item is marked 'No'.
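
The Pseudocode row above quotes only the interface of Algorithm 1 (feature extractor f_θ, memory M of size K, batch size B, learning rate η). The following is a minimal PyTorch-style sketch of what a DUEL-like training loop could look like, assuming an InfoNCE-style contrastive loss against the active memory and a duplicate-elimination policy that drops the most mutually similar entries. The function names (`duel_eliminate`, `train_duel`), the redundancy score, and the loss form are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def duel_eliminate(memory: torch.Tensor, num_remove: int) -> torch.Tensor:
    """Drop the `num_remove` most redundant rows from the memory.

    Redundancy here is the summed cosine similarity of each row to all
    other rows (an assumed stand-in for the paper's pi_DUEL policy).
    """
    sim = memory @ memory.t()                      # pairwise similarities
    redundancy = sim.sum(dim=1) - sim.diagonal()   # exclude self-similarity
    keep = redundancy.argsort()[: memory.size(0) - num_remove]
    return memory[keep]

def train_duel(f_theta, loader, feat_dim, K=4096, lr=1e-3, tau=0.2, device="cpu"):
    opt = torch.optim.SGD(f_theta.parameters(), lr=lr)
    memory = F.normalize(torch.randn(K, feat_dim, device=device), dim=1)

    for x_q, x_k in loader:                        # two augmented views per sample
        q = F.normalize(f_theta(x_q.to(device)), dim=1)
        k = F.normalize(f_theta(x_k.to(device)), dim=1).detach()

        # Generic InfoNCE: the other view is the positive, memory entries
        # act as negatives (an assumption about the loss form).
        l_pos = (q * k).sum(dim=1, keepdim=True)
        l_neg = q @ memory.t()
        logits = torch.cat([l_pos, l_neg], dim=1) / tau
        targets = torch.zeros(q.size(0), dtype=torch.long, device=device)
        loss = F.cross_entropy(logits, targets)

        opt.zero_grad()
        loss.backward()
        opt.step()

        # Insert the new keys, then apply the elimination policy so the
        # memory stays at size K while duplicated (majority-class) entries
        # are preferentially removed.
        memory = torch.cat([memory, k], dim=0)
        memory = duel_eliminate(memory, num_remove=k.size(0))

    return f_theta
```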
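
The Open Datasets row notes that the experiments use class-imbalanced versions of standard benchmarks. One common way to derive a long-tailed training set from a balanced dataset such as CIFAR-10 is exponential per-class subsampling; the sketch below uses that convention with an imbalance ratio `rho` as an assumption, since the paper's exact construction (and ImageNet-LT is already long-tailed) may differ.

```python
import numpy as np
from torch.utils.data import Subset
from torchvision.datasets import CIFAR10

def make_long_tailed(dataset, num_classes=10, rho=0.01, seed=0):
    """Subsample a balanced dataset so class c keeps n_max * rho**(c/(C-1)) samples."""
    rng = np.random.default_rng(seed)
    targets = np.asarray(dataset.targets)
    n_max = int((targets == 0).sum())            # size of the largest class
    indices = []
    for c in range(num_classes):
        n_c = int(n_max * rho ** (c / (num_classes - 1)))
        cls_idx = np.flatnonzero(targets == c)
        indices.extend(rng.choice(cls_idx, size=n_c, replace=False))
    return Subset(dataset, indices)

# Usage: a CIFAR-10 training set with a 100:1 ratio between the largest
# and smallest classes (rho = 0.01).
# train_lt = make_long_tailed(CIFAR10("./data", train=True, download=True))
```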
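
Finally, the Dataset Splits row refers to evaluation by "linear probing with class-balanced datasets". The sketch below shows the standard protocol that phrase usually implies: freeze the pretrained encoder, fit a single linear classifier on its features, and report accuracy on a held-out balanced test set. Function and parameter names are illustrative; the paper's own probing hyperparameters are stated to be in its Appendix E.

```python
import torch
import torch.nn.functional as F

def linear_probe(f_theta, train_loader, test_loader, feat_dim, num_classes,
                 epochs=10, lr=1e-2, device="cpu"):
    f_theta.eval()                                   # encoder stays frozen
    clf = torch.nn.Linear(feat_dim, num_classes).to(device)
    opt = torch.optim.SGD(clf.parameters(), lr=lr)

    for _ in range(epochs):
        for x, y in train_loader:
            with torch.no_grad():
                z = f_theta(x.to(device))            # frozen features
            loss = F.cross_entropy(clf(z), y.to(device))
            opt.zero_grad()
            loss.backward()
            opt.step()

    correct = total = 0
    with torch.no_grad():
        for x, y in test_loader:
            pred = clf(f_theta(x.to(device))).argmax(dim=1)
            correct += (pred == y.to(device)).sum().item()
            total += y.size(0)
    return correct / total                           # linear probing accuracy
```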