Enhancing Multi-Label Classification via Dynamic Label-Order Learning
Authors: Jiangnan Li, Yice Zhang, Shiwei Chen, Ruifeng Xu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on public datasets reveal that our approach greatly outperforms previous methods. |
| Researcher Affiliation | Academia | 1Harbin Institute of Technology, Shenzhen, China 2Peng Cheng Laboratory, Shenzhen, China 3Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies |
| Pseudocode | Yes | Algorithm 1: The proposed label-order learning algorithm |
| Open Source Code | Yes | We will release our code at https://github.com/KagamiBaka/DLOL. |
| Open Datasets | Yes | The dataset used is Reuters-21578 (Hayes and Weinstein 1990), and we evaluate our approach on four typical multi-label classification datasets, namely Reuters-21578, RCV1-V2, Slashdot, and GoEmotions. |
| Dataset Splits | Yes | Table 3: Statistics of four datasets. (includes columns for Train, Dev, Test sample counts) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions using BART-base as the backbone model but does not specify software dependencies with version numbers (e.g., Python, PyTorch, or CUDA versions). |
| Experiment Setup | Yes | Our approach has three hyper-parameters for multi-reference training, label smoothing, and the EOS penalty... the optimal values for these hyper-parameters are α = 0.1, β = 0.1, and γ = 0.9, respectively. |
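To make the reported hyper-parameters concrete, here is a minimal sketch of a label-smoothed cross-entropy loss combined with an EOS penalty, as one might apply at each decoding step of a label-sequence generator. This is an illustrative reconstruction, not the paper's code: the function name, the interpretation of β as the smoothing mass, and the choice to implement the EOS penalty by scaling the EOS logit by γ are all assumptions.

```python
import math

def smoothed_step_loss(logits, target_idx, eos_idx, beta=0.1, gamma=0.9):
    """One decoding step of a label-smoothed NLL loss with an EOS penalty.

    Hypothetical sketch: `beta` is the label-smoothing mass spread over
    all classes, and `gamma` scales the EOS logit so the decoder is
    discouraged from ending the label sequence too early. The paper's
    exact formulation may differ.
    """
    logits = list(logits)
    logits[eos_idx] *= gamma  # assumed form of the EOS penalty

    # numerically stable log-softmax
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    log_probs = [x - log_z for x in logits]

    # smoothed target distribution: (1 - beta) on the gold label,
    # beta spread uniformly over all classes
    n = len(logits)
    loss = 0.0
    for i, lp in enumerate(log_probs):
        p = beta / n + ((1.0 - beta) if i == target_idx else 0.0)
        loss -= p * lp
    return loss
```

With β = 0 and γ = 1 this reduces to the standard per-step negative log-likelihood, which is a quick way to sanity-check the implementation.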