Hierarchical Classification Based on Label Distribution Learning

Authors: Changdong Xu, Xin Geng (pp. 5533-5540)

AAAI 2019

Reproducibility Assessment (variable, result, and supporting evidence from the paper)
Research Type: Experimental
    Evidence: "Experimental results on several hierarchical classification datasets show that our method significantly outperforms other state-of-the-art hierarchical classification approaches." "We conduct the experiments on several hierarchical classification datasets, which demonstrate the effectiveness of our proposed method."
Researcher Affiliation: Academia
    Evidence: Changdong Xu, Xin Geng. MOE Key Laboratory of Computer Network and Information Integration, School of Computer Science and Engineering, Southeast University, Nanjing 210096, China. {changdongxu, xgeng}@seu.edu.cn
Pseudocode: Yes
    Evidence: Algorithm 1 (Training) and Algorithm 2 (Prediction) are provided.
Open Source Code: No
    Evidence: The paper contains no statement about releasing source code for the described method and provides no link to a code repository.
Open Datasets: Yes
    Evidence: "We conduct our experiments on several hierarchical classification datasets." CLEF (Dimitrovski et al. 2011) is an image dataset. IPC is a document dataset, a collection of patents arranged under the International Patent Classification hierarchy (http://www.wipo.int/classifications/ipc/en/support/). LSHTC-small, DMOZ-2010, and DMOZ-2012 (Partalas et al. 2015) are document datasets released by the LSHTC (Large-Scale Hierarchical Text Classification) challenges of 2010 and 2012 (http://lshtc.iit.demokritos.gr/).
Dataset Splits: No
    Evidence: The paper mentions using "cross-validation to select the optimal parameters on the datasets" but does not specify a distinct validation set with explicit split percentages or counts for the primary dataset partitioning (e.g., an 80/10/10 split).
Hardware Specification: No
    Evidence: The paper provides no details about the hardware used for the experiments, such as GPU models, CPU specifications, or memory.
Software Dependencies: No
    Evidence: The paper mentions methods such as L-BFGS and logistic regression but does not name any software libraries or version numbers (e.g., Python 3.8, PyTorch 1.9, scikit-learn 0.24).
Experiment Setup: No
    Evidence: The paper gives only general settings: the penalty parameters are chosen from the range 10^-3 to 10^3, and α is chosen from the range 0 to 1. It provides no specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed training configurations for the final models.
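To make the reported settings concrete, the following is a minimal sketch of the kind of search the paper describes: an L2-penalized logistic regression fit with L-BFGS, with the penalty swept over the grid 10^-3 to 10^3. The synthetic data, the specific loss parameterization, and the use of training accuracy in place of the paper's cross-validation are all assumptions made here for brevity; this is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Toy binary data (illustrative only; the paper's datasets are CLEF, IPC,
# LSHTC-small, DMOZ-2010, and DMOZ-2012).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(float)

def penalized_nll(w, X, y, lam):
    """Mean logistic negative log-likelihood plus an L2 penalty lam * ||w||^2."""
    z = X @ w
    # log(1 + exp(z)) - y*z, computed stably with logaddexp
    loss = np.logaddexp(0.0, z) - y * z
    return loss.mean() + lam * np.dot(w, w)

best = None
for lam in np.logspace(-3, 3, 7):  # penalty grid 10^-3 ... 10^3, as in the paper
    w0 = np.zeros(X.shape[1])
    res = minimize(penalized_nll, w0, args=(X, y, lam), method="L-BFGS-B")
    acc = (((X @ res.x) > 0) == (y > 0.5)).mean()
    if best is None or acc > best[1]:
        best = (lam, acc)

print(f"best penalty={best[0]:g}, train accuracy={best[1]:.2f}")
```

In practice the paper selects such parameters by cross-validation rather than by training accuracy, so a faithful reproduction would replace the accuracy line with a held-out-fold score.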