Label Distribution Learning by Exploiting Label Correlations

Authors: Xiuyi Jia, Weiwei Li, Junyu Liu, Yu Zhang

Venue: AAAI 2018

Reproducibility assessment (each entry gives the variable, the result, and the supporting LLM response):
Research Type: Experimental. "Experimental results on eight real label distributed data sets demonstrate that the proposed algorithm performs remarkably better than both the state-of-the-art LDL methods and multi-label learning methods."
Researcher Affiliation: Academia. (1) School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China; (2) College of Astronautics, Nanjing University of Aeronautics and Astronautics, Nanjing, China; (3) School of Information Science and Engineering, East China University of Science and Technology, Shanghai, China.
Pseudocode: Yes. The paper presents "Algorithm 1: L-BFGS based LDLLC".
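
The reported Algorithm 1 optimizes LDLLC with L-BFGS. As a rough, non-authoritative illustration of what such a loop looks like, the sketch below fits a maximum-entropy style LDL model by minimizing the usual KL-divergence loss plus a label-correlation penalty with SciPy's L-BFGS-B. The Pearson-correlation surrogate penalty and all function names are our own assumptions, not the paper's exact objective.

```python
import numpy as np
from scipy.optimize import minimize

def softmax_predict(W, X):
    """Row-wise softmax of X @ W (maximum-entropy style prediction)."""
    Z = X @ W
    Z -= Z.max(axis=1, keepdims=True)  # numerical stability
    P = np.exp(Z)
    return P / P.sum(axis=1, keepdims=True)

def fit_ldllc_sketch(X, D, lam1=0.1, lam2=0.01):
    """L-BFGS fit of an LDL model with a label-correlation penalty.

    A sketch, NOT the authors' exact LDLLC objective: the correlation
    term below merely pulls together parameter columns of positively
    correlated labels and pushes apart negatively correlated ones.
    lam1/lam2 mirror the paper's lambda1 = 0.1 and lambda2 = 0.01.
    """
    n, d = X.shape
    c = D.shape[1]
    R = np.corrcoef(D, rowvar=False)  # (c, c) Pearson label correlations

    def objective(w):
        W = w.reshape(d, c)
        P = softmax_predict(W, X)
        # KL(D || P): the standard LDL fitting loss.
        kl = np.sum(D * (np.log(D + 1e-12) - np.log(P + 1e-12)))
        # Surrogate correlation penalty over pairs of label columns.
        corr = sum(R[j, k] * np.sum((W[:, j] - W[:, k]) ** 2)
                   for j in range(c) for k in range(c))
        return kl + lam1 * corr + lam2 * np.sum(W ** 2)

    res = minimize(objective, np.zeros(d * c), method="L-BFGS-B")
    return res.x.reshape(d, c)
```
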
Open Source Code: No. The paper neither provides access to source code for the described method nor states that the code will be released.
Open Datasets: Yes. "The datasets used in the experiments were collected from five biological experiments on the budding yeast Saccharomyces cerevisiae. There are 2465 yeast genes in total, each of which is represented by an associated phylogenetic profile vector of length 24. ... The details of the eight datasets are summarized in Table 1."
Dataset Splits: No. The paper states "For each data set, we randomly partition the data into the training (80%) and test (20%) sets for classifier calibration and performance evaluation", but it does not define a separate validation split.
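
For concreteness, the described 80/20 partition (with no validation set) can be reproduced as below; the toy data mirrors the stated yeast dimensions (2465 instances, 24 features), while the 5-label count and the synthetic distributions are purely our own placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2465, 24))           # 2465 genes, 24-dim profile vectors
D = rng.dirichlet(np.ones(5), size=2465)  # toy label distributions (5 labels assumed)

# 80% training / 20% test, with no separate validation split, as in the paper.
X_train, X_test, D_train, D_test = train_test_split(X, D, test_size=0.2)
```
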
Hardware Specification: No. The paper does not report the hardware used for the experiments, such as CPU/GPU models or memory.
Software Dependencies: No. The paper mentions that "PT-SVM is implemented as the C-SVC type in LIBSVM", but it gives no version number for LIBSVM or for any other software dependency.
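
Since scikit-learn's SVC wraps LIBSVM's C-SVC, one hedged way to reconstruct the PT-SVM baseline is the standard problem-transformation trick: expand each label distribution into weighted single-label examples. The function name, kernel choice, and calibration settings below are our assumptions, not details from the paper.

```python
import numpy as np
from sklearn.svm import SVC  # scikit-learn's SVC wraps LIBSVM's C-SVC

def pt_svm_fit(X, D):
    """PT-SVM sketch: one weighted single-label example per (instance, label)."""
    n, c = D.shape
    X_rep = np.repeat(X, c, axis=0)    # each instance repeated once per label
    y_rep = np.tile(np.arange(c), n)   # label index for each copy
    w_rep = D.ravel()                  # description degree used as sample weight
    clf = SVC(kernel="rbf", probability=True)  # class probabilities ~ a distribution
    clf.fit(X_rep, y_rep, sample_weight=w_rep)
    return clf
```
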
Experiment Setup: Yes. "For LDLLC, the parameters are set to: λ1 = 0.1 and λ2 = 0.01. ... For BP-MLL, the number of hidden neurons in neural networks is 20% of the number of features. The learning rate α is set to 0.05 and the maximum times of iterations for training are 100."
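
Gathered in one place, the reported hyper-parameters could be organized as follows; the function name and dict layout are our own convention, and only the numeric values come from the paper.

```python
def experiment_config(n_features: int) -> dict:
    """Hyper-parameters as quoted from the paper's experiment setup."""
    return {
        "LDLLC": {"lambda1": 0.1, "lambda2": 0.01},
        "BP-MLL": {
            "hidden_neurons": max(1, round(0.2 * n_features)),  # 20% of features
            "learning_rate": 0.05,   # alpha in the paper
            "max_iterations": 100,
        },
    }
```
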