Instance-Dependent Partial Label Learning

Authors: Ning Xu, Congyu Qiao, Xin Geng, Min-Ling Zhang

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on benchmark and real-world datasets validate the effectiveness of the proposed method.
Researcher Affiliation | Academia | Ning Xu, Congyu Qiao, Xin Geng, and Min-Ling Zhang. School of Computer Science and Engineering, Southeast University, Nanjing 210096, China; MOE Key Laboratory of Computer Network and Information Integration, Ministry of Education, China. {xning, qiaocy, xgeng, zhangml}@seu.edu.cn
Pseudocode | Yes | Algorithm 1 (VALEN Algorithm)
Open Source Code | Yes | Source code is available at https://github.com/palm-ml/valen.
Open Datasets | Yes | We adopt four widely used benchmark datasets including MNIST [22], Fashion-MNIST [32], Kuzushiji-MNIST [6], and CIFAR-10 [21], and five datasets from the UCI Machine Learning Repository [1], including Yeast, Texture, Dermatology, Synthetic Control, and 20Newsgroups. [...] The datasets corrupted by the instance-dependent generating procedure are available at https://drive.google.com/drive/folders/1J_68EqOrLN6tA56RcyTgcr1komJB31Y1?usp=sharing. (A loading sketch for the benchmark datasets follows the table.)
Dataset Splits | Yes | We run 5 trials on the four benchmark datasets and perform five-fold cross-validation on UCI datasets and real-world PLL datasets. (See the evaluation-protocol sketch after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running its experiments.
Software Dependencies | No | We implement the comparing methods with PyTorch. The paper mentions PyTorch but does not specify a version number or other software dependencies with versions.
Experiment Setup | Yes | Specifically, the 32-layer ResNet is trained on CIFAR-10, in which the learning rate, weight decay and mini-batch size are set to 0.05, 10^-3 and 256, respectively. The three-layer MLP is trained on MNIST, Fashion-MNIST and Kuzushiji-MNIST, where the learning rate, weight decay and mini-batch size are set to 10^-2, 10^-4 and 256, respectively. The linear model is trained on the UCI and real-world PLL datasets, where the learning rate, weight decay and mini-batch size are set to 10^-2, 10^-4 and 100, respectively. The number of epochs is set to 500, in which the first 10 epochs are warm-up training. (These values are collected into a config sketch after the table.)
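
The four benchmark datasets named in the Open Datasets row are all distributed with torchvision; the minimal sketch below shows how they can be fetched. The root directory and the bare `ToTensor` transform are illustrative assumptions, and the instance-dependent candidate labels themselves come from the authors' linked Google Drive folder, not from these loaders.

```python
# Illustrative loaders for the benchmark datasets listed in the Open Datasets row.
# Only the ordinary single-label datasets are fetched here; the instance-dependent
# candidate-label sets are distributed separately via the authors' Drive link.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()  # assumed minimal preprocessing

benchmarks = {
    "MNIST":           datasets.MNIST("data", train=True, download=True, transform=to_tensor),
    "Fashion-MNIST":   datasets.FashionMNIST("data", train=True, download=True, transform=to_tensor),
    "Kuzushiji-MNIST": datasets.KMNIST("data", train=True, download=True, transform=to_tensor),
    "CIFAR-10":        datasets.CIFAR10("data", train=True, download=True, transform=to_tensor),
}
```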
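
The Dataset Splits row describes two evaluation protocols: 5 repeated trials on the benchmark datasets and five-fold cross-validation on the UCI and real-world PLL datasets. The sketch below illustrates that protocol only; the `run_trial` / `run_fold` callables and the use of scikit-learn's `KFold` are hypothetical stand-ins, not the authors' released code.

```python
# Sketch of the evaluation protocol from the Dataset Splits row.
# `run_trial` and `run_fold` are hypothetical callables that train a model
# and return its test accuracy.
import numpy as np
from sklearn.model_selection import KFold

def benchmark_trials(run_trial, dataset, n_trials=5):
    """5 trials on a benchmark dataset, one per random seed; report mean/std accuracy."""
    accs = [run_trial(dataset, seed=s) for s in range(n_trials)]
    return float(np.mean(accs)), float(np.std(accs))

def uci_cross_validation(run_fold, X, y, n_splits=5):
    """Five-fold cross-validation on a UCI or real-world PLL dataset."""
    accs = []
    for train_idx, test_idx in KFold(n_splits=n_splits, shuffle=True).split(X):
        accs.append(run_fold((X[train_idx], y[train_idx]),
                             (X[test_idx], y[test_idx])))
    return float(np.mean(accs)), float(np.std(accs))
```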
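
The Experiment Setup row fixes the optimization hyperparameters per dataset group. The sketch below simply collects those quoted values into one config table; the choice of SGD with momentum is an assumption for illustration, since the excerpt states the values but not the optimizer.

```python
# Hyperparameters quoted in the Experiment Setup row, gathered into one place.
# The optimizer (SGD with momentum 0.9) is an assumed placeholder.
import torch

SETUPS = {
    "cifar10":            dict(model="32-layer ResNet", lr=0.05, weight_decay=1e-3, batch_size=256),
    "mnist":              dict(model="3-layer MLP",     lr=1e-2, weight_decay=1e-4, batch_size=256),
    "fashion_mnist":      dict(model="3-layer MLP",     lr=1e-2, weight_decay=1e-4, batch_size=256),
    "kuzushiji_mnist":    dict(model="3-layer MLP",     lr=1e-2, weight_decay=1e-4, batch_size=256),
    "uci_and_real_world": dict(model="linear",          lr=1e-2, weight_decay=1e-4, batch_size=100),
}
TOTAL_EPOCHS, WARM_UP_EPOCHS = 500, 10  # first 10 epochs are warm-up training

def make_optimizer(model: torch.nn.Module, dataset: str) -> torch.optim.Optimizer:
    cfg = SETUPS[dataset]
    return torch.optim.SGD(model.parameters(), lr=cfg["lr"],
                           weight_decay=cfg["weight_decay"], momentum=0.9)
```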