End-to-End Probabilistic Label-Specific Feature Learning for Multi-Label Classification
Authors: Jun-Yi Hang, Min-Ling Zhang, Yanghe Feng, Xiaocheng Song (pp. 6847-6855)
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments on 14 benchmark data sets show that our approach outperforms the state-of-the-art counterparts. |
| Researcher Affiliation | Academia | (1) School of Computer Science and Engineering, Southeast University, Nanjing 210096, China; (2) Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, China; (3) College of Systems Engineering, National University of Defense Technology, Changsha 410073, China; (4) Beijing Institute of Electronic Engineering, Beijing 100854, China |
| Pseudocode | No | The paper describes the proposed approach using text and mathematical equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code package is publicly available at: http://palm.seu.edu.cn/zhangml/files/PACA.rar |
| Open Datasets | Yes | Table 1: Characteristics of the experimental data sets. ... 1http://mulan.sourceforge.net/datasets.html 2http://palm.seu.edu.cn/zhangml/ 3http://lear.inrialpes.fr/people/guillaumin/data.php |
| Dataset Splits | Yes | We employ ten-fold cross validation to evaluate the above approaches on the 14 data sets. |
| Hardware Specification | No | The paper states: 'We thank the Big Data Center of Southeast University for providing the facility support on the numerical calculations in this paper.' but does not specify any hardware details like GPU/CPU models or memory. |
| Software Dependencies | No | The paper mentions using 'Adam' for network optimization but does not provide specific software names with version numbers for libraries or frameworks (e.g., TensorFlow, PyTorch, scikit-learn). |
| Experiment Setup | Yes | We employ a fully-connected neural network with hidden dimensionality [256]...hidden dimensionalities of the encoder and the decoder are both set to [256, 512, 256]...Adam with a batch size of 128, weight decay of 10^-5, momentums of 0.999 and 0.9 is employed. |
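The evaluation protocol quoted above (ten-fold cross validation over each data set) can be sketched as follows. This is a minimal illustration using scikit-learn's `KFold`; the array shapes and random data are placeholders, not taken from the paper's benchmarks.

```python
import numpy as np
from sklearn.model_selection import KFold

# Hedged sketch of the ten-fold cross-validation protocol described in the
# report. X and Y are synthetic stand-ins for a multi-label data set.
rng = np.random.default_rng(0)
X = rng.random((100, 294))                 # placeholder feature matrix
Y = rng.integers(0, 2, size=(100, 6))      # placeholder multi-label targets

kf = KFold(n_splits=10, shuffle=True, random_state=0)
fold_sizes = []
for train_idx, test_idx in kf.split(X):
    X_train, Y_train = X[train_idx], Y[train_idx]
    X_test, Y_test = X[test_idx], Y[test_idx]
    # ...train on the 9 training folds, evaluate on the held-out fold...
    fold_sizes.append(len(test_idx))
```

Each example is held out exactly once, so per-fold metrics can be averaged into the scores reported per data set.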
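The hyperparameters quoted in the Experiment Setup row can be expressed concretely. Below is a hedged PyTorch sketch, assuming the paper's PyTorch-style setup (the paper does not name its framework); the input and label dimensions are placeholders, and only the reported values (hidden dimensionality [256], Adam with weight decay 10^-5 and momentums 0.999/0.9, batch size 128) come from the paper.

```python
import torch
import torch.nn as nn

def build_model_and_optimizer(in_dim: int = 294, num_labels: int = 6):
    """Sketch of the reported training configuration.

    in_dim and num_labels are illustrative placeholders; the paper
    evaluates on 14 benchmark data sets of varying dimensionality.
    """
    model = nn.Sequential(
        nn.Linear(in_dim, 256),      # hidden dimensionality [256] as reported
        nn.ReLU(),
        nn.Linear(256, num_labels),  # one output per label
    )
    optimizer = torch.optim.Adam(
        model.parameters(),
        betas=(0.9, 0.999),   # the reported "momentums of 0.999 and 0.9"
        weight_decay=1e-5,    # weight decay of 10^-5
    )
    return model, optimizer

model, optimizer = build_model_and_optimizer()
batch_size = 128  # batch size reported in the paper
```

The encoder/decoder dimensionalities [256, 512, 256] mentioned in the same row would be built analogously with three stacked `nn.Linear` layers.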