Distilling Reliable Knowledge for Instance-Dependent Partial Label Learning
Authors: Dong-Dong Wu, Deng-Bao Wang, Min-Ling Zhang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments and analysis on multiple datasets validate the rationality and superiority of our proposed approach. |
| Researcher Affiliation | Academia | School of Computer Science and Engineering, Southeast University, Nanjing 210096, China Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, China {dongdongwu, wangdb, zhangml}@seu.edu.cn |
| Pseudocode | Yes | The pseudo-code of our complete algorithm DIRK is shown in Appendix A.3. ... The pseudo-code and flowchart of DIRK-REF are shown in Appendix A.4. |
| Open Source Code | Yes | Source code is available at https://github.com/wu-dd/DIRK. |
| Open Datasets | Yes | We evaluated our method on seven commonly used benchmark image datasets: Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017), Kuzushiji-MNIST (Clanuwat et al. 2018), CIFAR-10 (Krizhevsky, Hinton et al. 2009), CIFAR-100 (Krizhevsky, Hinton et al. 2009), CUB-200 (Welinder et al. 2010), Flower (Nilsback and Zisserman 2008) and Oxford-IIIT Pet (Parkhi et al. 2012). |
| Dataset Splits | Yes | The hyperparameters were selected so as to maximize the accuracy on a validation set (10% of the training set). |
| Hardware Specification | Yes | Our implementation was executed using PyTorch (Paszke et al. 2019), and all experiments were conducted with NVIDIA Tesla V100 GPU. |
| Software Dependencies | No | Our implementation was executed using PyTorch (Paszke et al. 2019). While PyTorch is mentioned, a specific version number is not provided. |
| Experiment Setup | Yes | For all methods on benchmark datasets, we used SGD as the optimizer with a momentum of 0.9, a weight decay of 1e-3, an initial learning rate of 1e-2, and set the epoch number to 500. ... We set the momentum hyperparameter m as 0.99 and the trade-off parameter λ as 0 in DIRK. ... temperature hyperparameters τ1 = 0.01, τ2 = 0.07, and the sizes of both queues are fixed to be 1024. For large-scale datasets CUB-200, Flower, and Oxford-IIIT Pet, we set the mini-batch size as 32, while 256 for other datasets. |
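To make the reported optimizer settings concrete, the sketch below implements a single SGD update with momentum and weight decay in plain Python, following PyTorch's convention of folding the L2 penalty into the gradient. The hyperparameter values (lr=1e-2, momentum=0.9, weight decay assumed to be 1e-3, since a decay of 1e3 would be implausible) are taken from the table above; the function name and scalar-parameter setup are illustrative, not from the paper.

```python
# Minimal sketch of one SGD-with-momentum update, PyTorch-style:
# the weight-decay term is added to the gradient before the momentum step.
# Hyperparameters mirror the reported setup; `sgd_step` is a hypothetical helper.

def sgd_step(w, grad, v, lr=1e-2, momentum=0.9, weight_decay=1e-3):
    """Return (updated weight, updated momentum buffer) for a scalar parameter."""
    g = grad + weight_decay * w   # L2 penalty folded into the gradient
    v = momentum * v + g          # momentum buffer update
    w = w - lr * v                # parameter update
    return w, v

# One step from w=1.0 with zero gradient: only weight decay acts.
w, v = sgd_step(w=1.0, grad=0.0, v=0.0)
print(w)  # 0.99999 — the weight is pulled slightly toward zero
```

In the actual experiments this corresponds to `torch.optim.SGD(params, lr=1e-2, momentum=0.9, weight_decay=1e-3)` applied over 500 epochs.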