Semi-supervised Multi-label Learning with Balanced Binary Angular Margin Loss
Authors: Ximing Li, Silong Liang, Changchun Li, Pengfei Wang, Fangming Gu
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To evaluate the effectiveness of S2ML2-BBAM, we compare it with existing competitors on benchmark datasets. The experimental results validate that S2ML2-BBAM can achieve very competitive performance. |
| Researcher Affiliation | Academia | (1) College of Computer Science and Technology, Jilin University, China; (2) Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, China; (3) Computer Network Information Center, Chinese Academy of Sciences, China; (4) University of Chinese Academy of Sciences, Chinese Academy of Sciences, China |
| Pseudocode | Yes | Algorithm 1 provides a detailed description of the training process of the model. |
| Open Source Code | No | No statement explicitly providing open-source code for the methodology or a link to a code repository is found within the paper's main content or appendices. |
| Open Datasets | Yes | We employ 5 widely used MLL datasets, including the image datasets Pascal VOC-2012 (VOC) [27], MS-COCO 2014 (COCO) [28] and Animals with Attributes 2 (AWA) [29], and the text datasets Ohsumed [30] and AAPD [31]. |
| Dataset Splits | Yes | For each dataset, we randomly select a proportion π of the training samples as labeled ones, and the remaining as unlabeled ones. We set π ∈ {5%, 10%, 15%, 20%} to explore the performance of our method under different data proportions. (A split sketch is given after the table.) |
| Hardware Specification | No | No specific hardware specifications (e.g., GPU/CPU models, memory details) are provided within the paper's main content or appendices. |
| Software Dependencies | No | We employ 5 evaluation metrics, including Micro-F1, Macro-F1, mean average precision (mAP), Hamming Loss and One Loss [1], and compute them with the Scikit-Learn tool. (A metric-computation sketch is given after the table.) |
| Experiment Setup | Yes | Implementation details. We use the pre-trained ResNet-50 [35] as the backbone for image datasets and the BERT-base-uncased model [36] for text datasets. We set the decay of EMA as 0.9997. The batch size is 32 for VOC, 128 for AWA and 64 for COCO, Ohsumed and AAPD. The warm-up epoch T0 is 12. The s and m are 20 and 0.4 in VOC, 20 and 0.3 in COCO, and 10 and 0.2 in AWA, Ohsumed and AAPD. The negative sampling parameter η is set to 5. (These hyper-parameters are restated in the config sketch after the table.) |
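
The Dataset Splits row quotes a labeled/unlabeled split controlled by a proportion π. The sketch below is a minimal illustration of that protocol under stated assumptions, not the authors' code; the function name, seed handling, and NumPy-based indexing are assumptions.

```python
# Minimal sketch of the labeled/unlabeled split (assumed implementation):
# a random proportion `pi` of training indices is labeled, the rest unlabeled.
import numpy as np

def split_labeled_unlabeled(num_train: int, pi: float, seed: int = 0):
    """Return (labeled_idx, unlabeled_idx) for a random split with labeled proportion pi."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_train)
    n_labeled = int(round(pi * num_train))
    return perm[:n_labeled], perm[n_labeled:]

# Example: 10,000 training samples with pi = 10% labeled, one of the quoted settings.
labeled_idx, unlabeled_idx = split_labeled_unlabeled(10_000, 0.10)
```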
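
The Software Dependencies row mentions computing Micro-F1, Macro-F1, mAP, Hamming Loss and One Loss with Scikit-Learn. The sketch below shows one plausible way to do this: the first four metrics use standard scikit-learn calls, while One Loss (one-error) has no built-in and is computed manually; the 0.5 threshold and variable names are assumptions.

```python
# Sketch of the quoted metric computation (assumed, not the authors' code).
import numpy as np
from sklearn.metrics import f1_score, average_precision_score, hamming_loss

def multilabel_metrics(y_true: np.ndarray, y_score: np.ndarray, threshold: float = 0.5):
    """y_true: (n, L) binary label matrix; y_score: (n, L) predicted scores."""
    y_pred = (y_score >= threshold).astype(int)
    # One-error: fraction of samples whose top-scoring label is not a true label.
    top = y_score.argmax(axis=1)
    one_error = float(np.mean(y_true[np.arange(len(y_true)), top] == 0))
    return {
        "Micro-F1": f1_score(y_true, y_pred, average="micro", zero_division=0),
        "Macro-F1": f1_score(y_true, y_pred, average="macro", zero_division=0),
        "mAP": average_precision_score(y_true, y_score, average="macro"),
        "Hamming Loss": hamming_loss(y_true, y_pred),
        "One Loss": one_error,
    }
```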
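
Finally, the Experiment Setup row lists the per-dataset hyper-parameters in prose. The dictionary below restates them for readability; the structure and key names are editorial assumptions, while the values come directly from the quoted text.

```python
# Quoted hyper-parameters restated as a config dict (key names are assumptions).
CONFIG = {
    "backbone": {"image": "ResNet-50 (pre-trained)", "text": "bert-base-uncased"},
    "ema_decay": 0.9997,
    "warmup_epochs": 12,         # T0
    "negative_sampling_eta": 5,  # η
    "batch_size": {"VOC": 32, "AWA": 128, "COCO": 64, "Ohsumed": 64, "AAPD": 64},
    "scale_s":    {"VOC": 20, "COCO": 20, "AWA": 10, "Ohsumed": 10, "AAPD": 10},
    "margin_m":   {"VOC": 0.4, "COCO": 0.3, "AWA": 0.2, "Ohsumed": 0.2, "AAPD": 0.2},
}
```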