Learning from Long-Tailed Noisy Data with Sample Selection and Balanced Loss

Authors: Lefan Zhang, Zhang-Hao Tian, Wujun Zhou, Wei Wang

IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on benchmarks demonstrate that our method outperforms existing state-of-the-art methods.
Researcher Affiliation | Academia | Lefan Zhang, Zhang-Hao Tian, Wujun Zhou and Wei Wang; National Key Laboratory for Novel Software Technology, Nanjing University, China; School of Artificial Intelligence, Nanjing University, China; {zhanglf, tianzh, zhouwujun, wangw}@lamda.nju.edu.cn
Pseudocode | Yes | Procedure 1: Class-Aware Sample Selection (CASS) and Algorithm 1: Learning with class-aware Sample Selection and Balanced Loss (SSBL). (A hedged sketch of class-aware selection follows the table.)
Open Source Code | No | The paper does not provide an explicit statement or link to open-source code for the described methodology.
Open Datasets | Yes | Datasets. We validate our method on seven benchmark datasets, namely CIFAR-10, CIFAR-100 [Krizhevsky et al., 2009], mini-ImageNet-Red [Jiang et al., 2020], Clothing1M [Xiao et al., 2015], Food-101N [Lee et al., 2018], Animal-10N [Song et al., 2019] and WebVision [Li et al., 2017]. (A loading sketch for the CIFAR datasets follows the table.)
Dataset Splits | No | The paper does not explicitly describe its validation splits (e.g., percentages, sample counts, or citations to predefined splits).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used to run its experiments.
Software Dependencies | No | The paper names model architectures such as ResNet and Inception-ResNet v2, but does not specify any programming languages, libraries, or solvers with version numbers.
Experiment Setup | Yes | On CIFAR-10, CIFAR-100, mini-ImageNet-Red and Animal-10N, we use an 18-layer PreAct ResNet and train for 200 epochs. On Clothing1M and Food-101N, we use a ResNet-50 and train for 200 epochs from scratch. On WebVision, we use an Inception-ResNet v2 and train for 100 epochs following [Li et al., 2020]. On all datasets, γ_sup is set as 3 and γ_rel is set as 1 in the loss L (refer to Appendix A for more details). (A config sketch follows the table.)
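
The Pseudocode row names Procedure 1 (CASS) without reproducing it. For orientation only, below is a minimal sketch of generic class-aware small-loss selection in the DivideMix style: fit a two-component Gaussian mixture to each class's normalized losses and keep the samples likely drawn from the small-loss component. The function name, the 0.5 threshold, and every implementation detail here are assumptions, not the paper's actual CASS procedure.

```python
# Hedged sketch of class-aware sample selection. NOT the paper's exact CASS
# procedure (the report only names it); this is a common per-class variant
# of small-loss selection. All names and thresholds are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

def class_aware_select(losses, labels, num_classes, threshold=0.5):
    """Return indices of likely-clean samples, selected per class.

    losses : (N,) array of per-sample training losses
    labels : (N,) array of (noisy) integer labels
    """
    clean_idx = []
    for c in range(num_classes):
        idx = np.where(labels == c)[0]
        if len(idx) < 2:                 # too few samples to fit a mixture
            clean_idx.extend(idx)
            continue
        l = losses[idx].reshape(-1, 1)
        l = (l - l.min()) / (l.max() - l.min() + 1e-8)   # per-class normalization
        gmm = GaussianMixture(n_components=2, reg_covar=5e-4).fit(l)
        clean_comp = gmm.means_.argmin()                 # small-loss component
        prob_clean = gmm.predict_proba(l)[:, clean_comp]
        clean_idx.extend(idx[prob_clean > threshold])
    return np.array(clean_idx)
```

Fitting the mixture separately for each class, rather than over all losses at once, is what keeps tail classes (few samples, typically larger losses) from being starved by head classes under a long-tailed label distribution.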
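
Of the seven benchmarks in the Open Datasets row, the two CIFAR variants can be fetched directly with torchvision; a minimal loading sketch, assuming torchvision is installed, follows. The remaining web-noise datasets (mini-ImageNet-Red, Clothing1M, Food-101N, Animal-10N, WebVision) are distributed by their respective authors and must be downloaded manually.

```python
# Minimal sketch: fetching the two CIFAR benchmarks via torchvision.
from torchvision import datasets, transforms

transform = transforms.ToTensor()
cifar10 = datasets.CIFAR10("./data", train=True, download=True, transform=transform)
cifar100 = datasets.CIFAR100("./data", train=True, download=True, transform=transform)
```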
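
The Experiment Setup row maps cleanly onto a per-dataset configuration. The sketch below restates only what the quoted text gives (architecture and epoch count per dataset, plus the loss hyperparameters γ_sup = 3 and γ_rel = 1); the key names are illustrative, and unstated details such as optimizer, learning rate, and batch size are deliberately left out rather than guessed.

```python
# Per-dataset setup as quoted from the paper. Key names are illustrative;
# batch size, optimizer, and learning rate are not stated and thus omitted.
EXPERIMENT_SETUP = {
    "cifar10":           {"arch": "preact_resnet18",     "epochs": 200},
    "cifar100":          {"arch": "preact_resnet18",     "epochs": 200},
    "mini_imagenet_red": {"arch": "preact_resnet18",     "epochs": 200},
    "animal10n":         {"arch": "preact_resnet18",     "epochs": 200},
    "clothing1m":        {"arch": "resnet50",            "epochs": 200},
    "food101n":          {"arch": "resnet50",            "epochs": 200},
    "webvision":         {"arch": "inception_resnet_v2", "epochs": 100},
}

# Loss hyperparameters reported for all datasets.
GAMMA_SUP = 3
GAMMA_REL = 1
```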