Fusion Label Enhancement for Multi-Label Learning
Authors: Xingyu Zhao, Yuexuan An, Ning Xu, Xin Geng
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on multiple benchmark datasets validate the effectiveness of the proposed approach. In this section, the efficiency and the performance of FLEM are evaluated on multiple MLL datasets. |
| Researcher Affiliation | Academia | (1) School of Computer Science and Engineering, Southeast University, Nanjing 211189, China; (2) Key Laboratory of Computer Network and Information Integration (Ministry of Education), Southeast University, Nanjing 211189, China |
| Pseudocode | Yes | Algorithm 1 FLEM algorithm |
| Open Source Code | Yes | Our code is available at: https://github.com/ailearn-ml/FLEM. |
| Open Datasets | Yes | we conduct experiments on several real-world datasets, including AAPD [Yang et al., 2018], Reuters [Debole and Sebastiani, 2005], VOC07, VOC12 [Everingham et al., 2015], COCO14, COCO17 [Lin et al., 2014], CUB [Wah et al., 2011] and NUS [Chua et al., 2009]. |
| Dataset Splits | Yes | Following common practices [Liu et al., 2017; Lanchantin et al., 2021], these datasets are split into training, validation, and testing sets. Statistics of these real-world datasets are given in Table 1. N_train, N_val, N_test, D, and L denote the number of training samples, validation samples, testing samples, feature dimensions, and labels, respectively. |
| Hardware Specification | Yes | All the computations are performed on a GPU server with NVIDIA Tesla V100, Intel Xeon Gold 6240 CPU 2.60 GHz processor and 32 GB GPU memory. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not provide a specific version number. No other software dependencies with version numbers are listed. |
| Experiment Setup | Yes | The optimization process spans 30 epochs using the AMSGrad variant [Reddi et al., 2018] of AdamW [Loshchilov and Hutter, 2017] with a weight decay of 0.0001. The learning rate is set to 0.001 for all algorithms. For FLEM, the hyperparameters α and β are selected by grid search from the set {0.0001, 0.001, 0.01, 0.1}. |
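
The reported optimizer and hyperparameter-search settings can be expressed concretely. Below is a minimal PyTorch sketch, assuming a generic model and a hypothetical `train_and_eval` helper (the actual training loop is in the authors' repository); only the optimizer flags and the {0.0001, 0.001, 0.01, 0.1} grid come from the paper.

```python
import itertools
import torch

def make_optimizer(model: torch.nn.Module) -> torch.optim.Optimizer:
    # Reported configuration: AdamW with the AMSGrad variant,
    # learning rate 0.001 and weight decay 0.0001.
    return torch.optim.AdamW(
        model.parameters(),
        lr=1e-3,
        weight_decay=1e-4,
        amsgrad=True,  # AMSGrad variant [Reddi et al., 2018]
    )

# Grid reported for FLEM's alpha and beta hyperparameters.
GRID = [1e-4, 1e-3, 1e-2, 1e-1]

def grid_search(train_and_eval):
    # train_and_eval(alpha, beta) -> validation score; hypothetical helper
    # standing in for 30 epochs of training plus validation.
    return max(itertools.product(GRID, GRID),
               key=lambda ab: train_and_eval(*ab))
```

The (α, β) pair maximizing validation performance would then be used for the final run, consistent with the grid-search description above.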