One Positive Label is Sufficient: Single-Positive Multi-Label Learning with Label Enhancement
Authors: Ning Xu, Congyu Qiao, Jiaqi Lv, Xin Geng, Min-Ling Zhang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on twelve corrupted MLL datasets show the effectiveness of SMILE over several existing SPMLL approaches. |
| Researcher Affiliation | Academia | ¹School of Computer Science and Engineering, Southeast University, Nanjing 210096, China ²RIKEN Center for Advanced Intelligence Project, Tokyo 103-0027, Japan {xning, qiaocy}@seu.edu.cn, is.jiaqi.lv@gmail.com, {xgeng, zhangml}@seu.edu.cn |
| Pseudocode | Yes | Algorithm 1 SMILE Algorithm |
| Open Source Code | Yes | Source code is available at https://github.com/palm-ml/smile. |
| Open Datasets | Yes | In the experiments, we adopt twelve widely-used MLL datasets [13], which cover a broad range of cases with diversified multi-label properties. |
| Dataset Splits | Yes | For each dataset, we run the comparing methods with 80%/10%/10% train/validation/test split. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or specific computational resources) used for running experiments. |
| Software Dependencies | No | The paper mentions using 'Adam optimizer [17]' but does not provide specific version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | The mini-batch size and the number of epochs are set to 16 and 25, respectively. The learning rate and weight decay are selected from {10⁻⁴, 10⁻³, 10⁻²} with a validation set. |
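The reported protocol (80%/10%/10% split; learning rate and weight decay searched over {10⁻⁴, 10⁻³, 10⁻²} on the validation set; batch size 16, 25 epochs) can be sketched as follows. This is a minimal illustration using only the Python standard library, not the authors' released code; the `validate` callable is a placeholder for training a model with the given hyperparameters and scoring it on the validation split.

```python
import random

# Hyperparameter search space and fixed settings from the paper's setup.
GRID = [1e-4, 1e-3, 1e-2]  # candidates for learning rate and weight decay
BATCH_SIZE, EPOCHS = 16, 25

def split_dataset(examples, seed=0):
    """80%/10%/10% train/validation/test split, as described in the report."""
    rng = random.Random(seed)
    idx = list(range(len(examples)))
    rng.shuffle(idx)
    n = len(examples)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    train = [examples[i] for i in idx[:n_train]]
    val = [examples[i] for i in idx[n_train:n_train + n_val]]
    test = [examples[i] for i in idx[n_train + n_val:]]
    return train, val, test

def select_hyperparameters(validate):
    """Return the (lr, weight_decay) pair maximizing the validation score.

    `validate` is a placeholder: in the actual experiments it would train
    SMILE with Adam for EPOCHS epochs at BATCH_SIZE and score on the
    validation split.
    """
    return max(((lr, wd) for lr in GRID for wd in GRID),
               key=lambda pair: validate(*pair))

if __name__ == "__main__":
    data = list(range(100))
    train, val, test = split_dataset(data)
    print(len(train), len(val), len(test))  # -> 80 10 10
    # Toy validation score for demonstration only (prefers lr=1e-3, wd=1e-4).
    best = select_hyperparameters(lambda lr, wd: -abs(lr - 1e-3) - wd)
    print(best)  # -> (0.001, 0.0001)
```

The split is seeded so that runs are reproducible; the grid search is exhaustive (9 combinations), which matches the small search space stated in the paper.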