Correlation-Induced Label Prior for Semi-Supervised Multi-Label Learning
Authors: Biao Liu, Ning Xu, Xiangyu Fang, Xin Geng
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments on several benchmark datasets have validated the superiority of the proposed method. |
| Researcher Affiliation | Academia | 1School of Computer Science and Engineering, Southeast University, Nanjing 210096, China 2Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China. Correspondence to: Ning Xu <xning@seu.edu.cn>, Xin Geng <xgeng@seu.edu.cn>. |
| Pseudocode | Yes | Algorithm 1 The PCLP Algorithm |
| Open Source Code | Yes | Source code is available at https://github.com/palm-biaoliu/pclp. |
| Open Datasets | Yes | We evaluate the effectiveness of the proposed method on three large-scale multi-label image classification datasets, including Pascal-VOC-2012 (VOC) (Everingham et al., 2010), MS-COCO-2014 (COCO) (Lin et al., 2014), and NUS-WIDE (NUS) (Chua et al., 2009). |
| Dataset Splits | No | The paper mentions 'training set' and 'unlabeled dataset' but does not provide specific training/test/validation dataset splits, percentages, or absolute sample counts for reproduction. |
| Hardware Specification | Yes | We implement all experiments by PyTorch on NVIDIA RTX 3090 GPUs. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not provide specific version numbers for software dependencies. |
| Experiment Setup | Yes | The architecture of the pseudo-label generator S and the encoder E is a ResNet50 followed by a 4-layer MLP, and the SAGAN architecture (Zhang et al., 2019) is used for the discriminator D and the sample generator G. For the tradeoff parameter λ, we fix it as 1 for all datasets. We use the Adam optimizer (Loshchilov & Hutter, 2019) with β = (0.9, 0.999), and RandAugment (Cubuk et al., 2020) and Cutout (DeVries & Taylor, 2017) for data augmentation for all datasets in all experiments. The batch size is 64 and the learning rate is 10^-4. |
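
A minimal PyTorch sketch of the quoted experiment setup, for orientation only: the ResNet50-plus-4-layer-MLP pseudo-label generator, RandAugment/Cutout-style augmentation, Adam with β = (0.9, 0.999), learning rate 10^-4, batch size 64, and λ = 1. The hidden widths, input resolution, label count, and the use of `RandomErasing` as a stand-in for Cutout are assumptions; the SAGAN discriminator/generator and the PCLP objective itself are not reproduced here (see the authors' repository for the actual implementation).

```python
# Hedged sketch of the setup described above -- not the authors' code.
# Hidden sizes, label count, and input resolution are assumptions;
# the SAGAN modules and the PCLP loss are omitted.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_LABELS = 80  # assumption: e.g. MS-COCO has 80 object categories


class PseudoLabelGenerator(nn.Module):
    """ResNet50 backbone followed by a 4-layer MLP, per the quoted setup."""

    def __init__(self, num_labels: int, hidden: int = 512):  # hidden width is an assumption
        super().__init__()
        backbone = models.resnet50(weights=None)
        feat_dim = backbone.fc.in_features   # 2048 for ResNet50
        backbone.fc = nn.Identity()          # drop the ImageNet classifier head
        self.backbone = backbone
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_labels),
        )

    def forward(self, x):
        return self.mlp(self.backbone(x))


# Augmentation as quoted: RandAugment + Cutout. RandomErasing is used here as
# an approximation of Cutout and is not identical to the original method.
augment = transforms.Compose([
    transforms.Resize((224, 224)),           # input size is an assumption
    transforms.RandAugment(),
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5),
])

model = PseudoLabelGenerator(NUM_LABELS)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
batch_size = 64
lambda_tradeoff = 1.0                        # λ fixed to 1 for all datasets, per the paper
```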