Revisiting Pseudo-Label for Single-Positive Multi-Label Learning
Authors: Biao Liu, Ning Xu, Jiaqi Lv, Xin Geng
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on four image datasets and five MLL datasets show the effectiveness of our methods over several existing SPMLL approaches. |
| Researcher Affiliation | Academia | 1School of Computer Science and Engineering, Southeast University, Nanjing 210096, China 2RIKEN Center for Advanced Intelligence Project, Tokyo 103-0027, Japan. Correspondence to: Ning Xu <xning@seu.edu.cn>, Xin Geng <xgeng@seu.edu.cn>. |
| Pseudocode | Yes | Algorithm 1 MIME Algorithm (a generic pseudo-label training sketch follows the table) |
| Open Source Code | No | No explicit statement about providing open-source code or a link to a code repository was found in the paper. |
| Open Datasets | Yes | In the experiments, following (Cole et al., 2021; Xu et al., 2022), we employed four large-scale multi-label image classification (MLIC) datasets and five widely-used MLL datasets (Hang & Zhang, 2022) to evaluate our proposed method. The four MLIC datasets include PASCAL VOC 2012 (VOC) (Everingham et al., 2010), MS-COCO 2014 (COCO) (Lin et al., 2014), NUS-WIDE (NUS) (Chua et al., 2009), and CUB-200-2011 (CUB) (Wah et al., 2011); the five MLL datasets cover a wide range of scenarios with heterogeneous multi-label characteristics. |
| Dataset Splits | Yes | For each MLIC dataset, we withhold 20% of the training set for validation. For each MLL dataset, we split the dataset into train/validation/test sets in a ratio of 80%/10%/10%. (A split sketch follows the table.) |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory amounts, or detailed computer specifications) used for running experiments were mentioned. |
| Software Dependencies | No | The paper mentions "We use the Adam optimizer (Kingma & Ba, 2015)" but does not provide specific version numbers for software components or libraries. |
| Experiment Setup | Yes | The batch size is selected from {8, 16} and the number of epochs is set to 10. The learning rate, weight decay, and the tradeoff parameter β are selected from {10^-2, 10^-3, 10^-4} with a validation set. (A grid-search sketch follows the table.) |
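The paper provides its method only as pseudocode (Algorithm 1, MIME), and the exact update rules are not reproduced in this report. For orientation, the sketch below shows a *generic* pseudo-label training step for single-positive multi-label learning under the common "assume negative" initialization, where unobserved labels start as negatives and are gradually replaced by the model's own soft predictions. This is an illustrative assumption, not the authors' MIME algorithm; `alpha` and all function names are hypothetical.

```python
import torch
import torch.nn.functional as F

def spmll_pseudo_label_step(model, optimizer, x, pos_idx, alpha=0.9):
    """One generic SPMLL training step (illustrative, NOT the paper's MIME).

    x:       batch of inputs, shape (B, ...)
    pos_idx: index of the single observed positive label per sample, shape (B,)
    alpha:   assumed weight shrinking pseudo-labels toward the assume-negative prior
    """
    logits = model(x)  # (B, num_classes)
    with torch.no_grad():
        # Pseudo-labels for unobserved entries: the model's own sigmoid scores,
        # shrunk toward 0 (the assume-negative prior) by alpha.
        targets = alpha * torch.sigmoid(logits)
        # The single observed positive label is always kept at 1.
        targets.scatter_(1, pos_idx.unsqueeze(1), 1.0)
    loss = F.binary_cross_entropy_with_logits(logits, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```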
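The reported splits are mechanical to reproduce. A minimal sketch, assuming index-based random splitting with a fixed seed (the seed and the splitting utility are assumptions; the paper does not specify them):

```python
import numpy as np

def split_indices(n, ratios, seed=0):
    """Shuffle n sample indices and split them by the given ratios (summing to 1)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    bounds = (np.cumsum(ratios)[:-1] * n).astype(int)
    return np.split(idx, bounds)

# MLIC datasets: withhold 20% of the training set for validation.
train_idx, val_idx = split_indices(n=10000, ratios=[0.8, 0.2])

# MLL datasets: 80%/10%/10% train/validation/test.
train_idx, val_idx, test_idx = split_indices(n=10000, ratios=[0.8, 0.1, 0.1])
```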
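The hyperparameter selection described in the Experiment Setup row amounts to a small grid search scored on the validation set, trained with Adam as stated in the Software Dependencies row. A minimal sketch; `build_model`, `train_one_epoch`, and `evaluate` are assumed placeholders, not code from the paper:

```python
from itertools import product
import torch

GRID = [1e-2, 1e-3, 1e-4]   # candidate values for lr, weight decay, and beta
BATCH_SIZES = [8, 16]
EPOCHS = 10                  # fixed per the paper

best_score, best_cfg = -float("inf"), None
for bs, lr, wd, beta in product(BATCH_SIZES, GRID, GRID, GRID):
    model = build_model()                                            # assumed helper
    optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=wd)
    for _ in range(EPOCHS):
        train_one_epoch(model, optimizer, batch_size=bs, beta=beta)  # assumed helper
    score = evaluate(model, split="validation")                      # assumed helper
    if score > best_score:
        best_score, best_cfg = score, (bs, lr, wd, beta)
```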