Decompositional Generation Process for Instance-Dependent Partial Label Learning

Authors: Congyu Qiao, Ning Xu, Xin Geng

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on manually corrupted benchmark datasets and real-world datasets validate the effectiveness of the proposed method.
Researcher Affiliation | Academia | Congyu Qiao, Ning Xu, Xin Geng; School of Computer Science and Engineering, Southeast University, Nanjing 210096, China; {qiaocy, xning, xgeng}@seu.edu.cn
Pseudocode | Yes | Algorithm 1: IDGP Algorithm
Open Source Code | Yes | Source code is available at https://github.com/palm-ml/idgp.
Open Datasets | Yes | We implement IDGP and the compared DNN-based algorithms on five widely used benchmark datasets in deep learning: MNIST (LeCun et al., 1998), Kuzushiji-MNIST (Clanuwat et al., 2018), Fashion-MNIST (Xiao et al., 2017), and CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009). In addition, some of the compared algorithms are also run on five frequently used real-world datasets from different practical application domains: Lost (Cour et al., 2011), Bird Song (Briggs et al., 2012), MSRCv2 (Liu & Dietterich, 2012), Soccer Player (Zeng et al., 2013), and Yahoo!News (Guillaumin et al., 2010).
Dataset Splits | Yes | For the benchmark datasets, we split off 10% of the samples from the training sets for validation. For each real-world dataset, we run the methods with an 80%/10%/10% train/validation/test split. (A split sketch is given after the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU specifications, or memory amounts used for running its experiments. It mentions using "deep neural networks" but gives no hardware specifics.
Software Dependencies | No | The paper mentions "stochastic gradient descent (SGD)" as an optimizer but does not specify version numbers for any software libraries (e.g., PyTorch, TensorFlow) or programming languages (e.g., Python version) used for implementation.
Experiment Setup | Yes | The optimizer is stochastic gradient descent (SGD) (Robbins & Monro, 1951) with momentum 0.9 and batch size 256. The details of the data augmentation strategy are shown in Appendix A.8. The learning rate is selected from {10^-4, 10^-3, 10^-2} and the weight decay from {10^-5, 10^-4, 10^-3, 10^-2} according to performance on the validation set. (An optimizer sketch is given after the table.)
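
The following is a minimal sketch of the splits reported in the Dataset Splits row. It assumes PyTorch-style Dataset objects (the paper does not name its framework), and the function names are illustrative, not taken from the authors' repository.

# Sketch of the reported splits; torch.utils.data is an assumption,
# since the paper does not specify its deep learning framework.
from torch.utils.data import random_split

def split_benchmark(train_set):
    # Benchmark datasets: hold out 10% of the training samples for validation.
    n_val = int(0.1 * len(train_set))
    return random_split(train_set, [len(train_set) - n_val, n_val])

def split_real_world(dataset):
    # Real-world datasets: 80%/10%/10% train/validation/test split.
    n = len(dataset)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return random_split(dataset, [n_train, n_val, n - n_train - n_val])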
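
A sketch of the optimizer configuration and hyperparameter grid from the Experiment Setup row follows, again assuming PyTorch; the training and validation-selection loop itself is omitted, and make_optimizer is a hypothetical helper.

# Sketch of the reported optimizer and hyperparameter grid (PyTorch assumed).
import itertools
import torch

def make_optimizer(model, lr, weight_decay):
    # SGD with momentum 0.9, as reported in the experiment setup.
    return torch.optim.SGD(model.parameters(), lr=lr,
                           momentum=0.9, weight_decay=weight_decay)

batch_size = 256
lr_grid = [1e-4, 1e-3, 1e-2]        # learning rates searched
wd_grid = [1e-5, 1e-4, 1e-3, 1e-2]  # weight decays searched
# Each (lr, wd) pair would be trained and scored on the validation split,
# and the best-performing configuration kept.
candidate_configs = list(itertools.product(lr_grid, wd_grid))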