Candidate Label Set Pruning: A Data-centric Perspective for Deep Partial-label Learning

Authors: Shuo He, Chaojie Wang, Guowu Yang, Lei Feng

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, extensive experiments on both benchmark-simulated and real-world PLL datasets validate the great value of CLSP in significantly improving many state-of-the-art deep PLL methods.
Researcher Affiliation | Academia | 1. University of Electronic Science and Technology of China; 2. Nanyang Technological University
Pseudocode | Yes | Appendix A (the pseudo-code of the proposed algorithm) gives Algorithm 1, the proposed CLSP method; a hedged sketch of the pruning step appears after this table.
Open Source Code | No | The paper mentions using open-source libraries such as LAVIS and Faiss and links to SimCLR's original configurations, but it does not provide a link to, or an explicit statement about open-sourcing, the code for the CLSP method described in the paper.
Open Datasets | Yes | We use three benchmark datasets, i.e., CIFAR-10 (Krizhevsky et al., 2009), CIFAR-100 (Krizhevsky et al., 2009), and Tiny-ImageNet (Wu et al., 2017), and a real-world PLL dataset, PASCAL VOC (Everingham et al., 2015).
Dataset Splits | No | The paper mentions using a 'validation set' in the context of theoretical analysis to estimate parameters, but does not provide specific details on its size or how it's split for experimental reproduction.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU/CPU models or memory.
Software Dependencies | No | The paper mentions using libraries like Faiss and LAVIS but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | On the whole, we employ a base training scheme: a ResNet-18 model, a learning rate of 1e-2, and a weight decay of 1e-3. On CIFAR-10 and CIFAR-100, CC, PRODEN, LWS, and CAVL do not employ a learning rate scheduler or data augmentation, which is the same as the original implementation. However, on the more difficult datasets CIFAR-10-LT, CIFAR-100-LT, Tiny-ImageNet, and VOC, they are equipped with consistency regularization on augmented examples and a cosine annealing learning rate scheduler... In particular, on VOC, the number of epochs is set to 100 for all PLL methods to avoid overfitting. (A hedged configuration sketch also follows this table.)
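
Because the authors' code for CLSP is not released (see the Open Source Code row), the following is a minimal sketch of the kind of feature-space k-nearest-neighbor pruning step that Algorithm 1 describes, using Faiss as the Software Dependencies row indicates. The encoder producing the features, the neighbor count k, the threshold tau, and the exact pruning rule are illustrative assumptions, not the authors' procedure.

```python
# Minimal sketch (not the authors' released code): prune candidate labels whose
# support among a sample's k nearest feature-space neighbors falls below a
# threshold. The feature source (e.g., a SimCLR/CLIP encoder via LAVIS), k, and
# tau are illustrative assumptions.
import numpy as np
import faiss  # similarity-search library the paper reports using


def prune_candidate_sets(features, candidate_masks, k=10, tau=0.3):
    """features: (n, d) embeddings; candidate_masks: (n, C) binary matrix with 1
    where a label is in the sample's candidate set. Returns pruned masks."""
    feats = np.array(features, dtype="float32", order="C")  # copy before in-place normalization
    faiss.normalize_L2(feats)                   # cosine similarity via inner product
    index = faiss.IndexFlatIP(feats.shape[1])
    index.add(feats)
    _, nn_idx = index.search(feats, k + 1)      # +1 because each point retrieves itself
    nn_idx = nn_idx[:, 1:]                      # drop the self-match

    pruned = candidate_masks.copy()
    for i in range(len(feats)):
        # Fraction of the k neighbors whose candidate sets contain each label.
        support = candidate_masks[nn_idx[i]].mean(axis=0)
        weak = (support < tau) & (candidate_masks[i] == 1)
        # Only prune if at least one candidate label would survive.
        if weak.sum() < candidate_masks[i].sum():
            pruned[i, weak] = 0
    return pruned
```

In use, the pruned candidate masks would simply replace the original ones before training any of the PLL methods listed in the Experiment Setup row.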
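
Separately, the Experiment Setup row quotes a base training scheme; below is a minimal PyTorch sketch of that configuration. The ResNet-18 backbone, the 1e-2 learning rate, the 1e-3 weight decay, the cosine annealing scheduler on the harder datasets, and the 100 epochs on VOC come from the quote; the choice of SGD, the momentum value, and the helper function itself are assumptions for illustration.

```python
# Minimal sketch of the quoted base training scheme, under assumed optimizer details.
import torch
import torchvision


def build_base_trainer(num_classes, use_cosine, epochs):
    model = torchvision.models.resnet18(num_classes=num_classes)
    optimizer = torch.optim.SGD(model.parameters(),
                                lr=1e-2,            # quoted learning rate
                                weight_decay=1e-3,  # quoted weight decay
                                momentum=0.9)       # assumed; not stated in the quote
    scheduler = None
    if use_cosine:  # CIFAR-10/100-LT, Tiny-ImageNet, and VOC per the quote
        scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
    return model, optimizer, scheduler


# VOC (20 classes) is trained for 100 epochs for all PLL methods to avoid overfitting.
model, optimizer, scheduler = build_base_trainer(num_classes=20, use_cosine=True, epochs=100)
```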