AdaBoost.C2: Boosting Classifiers Chains for Multi-Label Classification
Authors: Jiaxuan Li, Xiaoyan Zhu, Jiayin Wang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the effectiveness of the proposed method. In the experiments, 26 datasets from 7 different domains are utilized. The proposed AdaBoost.C2 is compared with the 4 baseline methods on 26 benchmark datasets; for each dataset, 10-fold cross validation is conducted 5 times and the average values are calculated. The comparison in terms of 4 well-known MLC evaluation metrics (Hamming Loss, Coverage, Ranking Loss, Average Precision) is reported in Tab. 4. To verify model efficiency, the running time of AdaBoost.C2 is compared with 2 boosting-based baselines, AdaBoost.MH and BOOMER. AdaBoost.C2 combines a multi-path AdaBoost framework with base CC classifiers: the multi-path framework is taken to learn the difference between labels, while the CC classifiers are employed to exploit the correlation between labels. To verify their respective contributions, an ablation study compares AdaBoost.C2 with 2 baselines. |
| Researcher Affiliation | Academia | Jiaxuan Li, Xiaoyan Zhu*, Jiayin Wang, School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an, China. lijiaxuan@stu.xjtu.edu.cn, zhu.xy@xjtu.edu.cn, wangjiayin@mail.xjtu.edu.cn |
| Pseudocode | No | The paper describes the algorithms and processes mathematically and in text, but it does not include a distinct 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | All code is openly available at https://github.com/JiaxuanGood/AdaBoostC2.git |
| Open Datasets | Yes | In the experiments, 26 datasets from 7 different domains are utilized. All these datasets are downloaded from the KDIS website: http://www.uco.es/kdis/mllresources/ |
| Dataset Splits | Yes | For each dataset, we conduct 10-fold cross validation 5 times and calculate the average values (a minimal sketch of this protocol appears below the table). |
| Hardware Specification | Yes | The runtime comparison results are shown in Fig. 4, and all the methods are executed on an Intel(R) Core(TM) i5-9500 CPU @ 3.00GHz. |
| Software Dependencies | No | For the base learner, if not otherwise specified, an RBF-kernel SVM with the defaults recommended in scikit-learn is used. The paper mentions 'scikit-learn' but does not provide a specific version number for it or any other software dependency. |
| Experiment Setup | Yes | For AdaBoost.C2, we set T = 10 as its upper limit of iteration rounds and δ = 0.01 as the lower limit of iteration error (a hedged sketch of this stopping rule appears below the table). For comparison methods, we take the recommended hyper-parameters from the corresponding literature (e.g. k = 10 for kNN in DECC). |
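
As a reading aid, here is a minimal sketch of the reported evaluation protocol: 5 repetitions of 10-fold cross validation, averaging the four MLC metrics over all 50 folds. It assumes a scikit-learn-style setup; the `ClassifierChain`-wrapped RBF SVM is only a plausible stand-in for the paper's CC base classifiers, not the authors' released code (see the repository linked above for that).

```python
import numpy as np
from sklearn.model_selection import RepeatedKFold
from sklearn.multioutput import ClassifierChain
from sklearn.svm import SVC
from sklearn.metrics import (
    hamming_loss,
    coverage_error,
    label_ranking_loss,
    label_ranking_average_precision_score,
)

def evaluate(X, Y, seed=0):
    """5x10-fold CV; returns the mean of each MLC metric over all folds."""
    rkf = RepeatedKFold(n_splits=10, n_repeats=5, random_state=seed)
    scores = {"hamming_loss": [], "coverage": [],
              "ranking_loss": [], "avg_precision": []}
    for train_idx, test_idx in rkf.split(X):
        # RBF-kernel SVM with scikit-learn defaults, wrapped in a
        # classifier chain (a stand-in for the paper's CC classifiers).
        model = ClassifierChain(SVC(kernel="rbf", probability=True))
        model.fit(X[train_idx], Y[train_idx])
        Y_pred = model.predict(X[test_idx])
        Y_score = model.predict_proba(X[test_idx])
        scores["hamming_loss"].append(hamming_loss(Y[test_idx], Y_pred))
        scores["coverage"].append(coverage_error(Y[test_idx], Y_score))
        scores["ranking_loss"].append(label_ranking_loss(Y[test_idx], Y_score))
        scores["avg_precision"].append(
            label_ranking_average_precision_score(Y[test_idx], Y_score)
        )
    return {name: float(np.mean(vals)) for name, vals in scores.items()}
```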
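And a minimal sketch of how the two reported hyper-parameters are typically used in a boosting loop: iterate for at most T = 10 rounds, stopping early once a round's weighted training error drops below δ = 0.01. This is a generic AdaBoost-style loop written for illustration; `fit_weak_learner` and `weighted_error` are hypothetical callables, and the paper's multi-path re-weighting scheme is deliberately left out.

```python
import numpy as np

T = 10        # upper limit of iteration rounds (from the paper)
DELTA = 0.01  # lower limit of iteration error (from the paper)

def boost(X, Y, fit_weak_learner, weighted_error):
    """Generic AdaBoost-style loop with the paper's stopping criteria.

    fit_weak_learner(X, Y, w) -> model
    weighted_error(model, X, Y, w) -> float in [0, 1]
    """
    n = X.shape[0]
    w = np.full(n, 1.0 / n)  # start from uniform instance weights
    ensemble = []
    for _ in range(T):
        model = fit_weak_learner(X, Y, w)
        err = weighted_error(model, X, Y, w)
        if err < DELTA:  # error already below the lower limit: stop early
            ensemble.append((1.0, model))
            break
        alpha = 0.5 * np.log((1.0 - err) / err)  # classical AdaBoost vote weight
        ensemble.append((alpha, model))
        # The instance re-weighting step is omitted: it is specific to the
        # AdaBoost variant (the paper uses a multi-path re-weighting scheme).
    return ensemble
```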