Classification with Label Distribution Learning

Authors: Jing Wang, Xin Geng

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we compare LDL4C with existing LDL algorithms on 17 real-world datasets, and experimental results demonstrate the effectiveness of LDL4C in classification." (Section 5.1, Experimental Configuration: "The experiments are extensively conducted on seventeen datasets totally...")
Researcher Affiliation | Academia | Jing Wang and Xin Geng, MOE Key Laboratory of Computer Network and Information Integration, School of Computer Science and Engineering, Southeast University, Nanjing 210096, China ({wangjing91, xgeng}@seu.edu.cn)
Pseudocode | No | The paper describes the proposed method and its optimization process (Section 3.5) with mathematical equations and prose, but it contains no clearly labeled pseudocode or algorithm block.
Open Source Code | No | The paper neither states that the source code for its methodology is publicly available nor links to a code repository.
Open Datasets | Yes | "The experiments are extensively conducted on seventeen datasets totally, among which fifteen are from [Geng, 2016], and M2B is from [Nguyen et al., 2012], and SCUT-FBP is from [Xie et al., 2015]."
Dataset Splits | Yes | "Finally, all algorithms are examined on 17 datasets with 10-fold cross validation, and average performance is reported."
Hardware Specification | No | The paper gives no details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not list any software dependencies with version numbers (e.g., programming languages, libraries, or frameworks) needed to replicate the experimental environment.
Experiment Setup | Yes | "For LDL4C, the balance parameters C1 and C2 are selected from {0.001, 0.01, 0.1, 1, 10, 100} and ρ is chosen from {0.001, 0.01, 0.1} by cross validation. Moreover, for AA-BP, the number of hidden-layer neurons is set to 64, and for AA-kNN, the number of nearest neighbors k is selected from {3, 5, 7, 9, 11}."
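The quoted setup is a plain grid search over (C1, C2, ρ) scored by cross validation. A minimal sketch of that selection loop, with `fit` and `score` as hypothetical stand-ins for LDL4C training and evaluation (the paper releases no reference code), could look like:

```python
import itertools
import numpy as np

def k_fold_indices(n, k=10, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n), k)

def cv_select(X, y, fit, score, grid, k=10):
    """Return the grid point with the best mean validation score.

    `fit(X_train, y_train, params)` and `score(model, X_val, y_val)` are
    caller-supplied stand-ins for the learner under evaluation.
    """
    folds = k_fold_indices(len(X), k)
    best_params, best_score = None, -np.inf
    for params in grid:
        fold_scores = []
        for i in range(k):
            val = folds[i]
            trn = np.concatenate([folds[j] for j in range(k) if j != i])
            model = fit(X[trn], y[trn], params)
            fold_scores.append(score(model, X[val], y[val]))
        mean = float(np.mean(fold_scores))
        if mean > best_score:
            best_params, best_score = params, mean
    return best_params, best_score

# Grid matching the search space quoted from the paper:
# C1, C2 in {0.001, ..., 100}, rho in {0.001, 0.01, 0.1}.
grid = list(itertools.product(
    [0.001, 0.01, 0.1, 1, 10, 100],   # C1
    [0.001, 0.01, 0.1, 1, 10, 100],   # C2
    [0.001, 0.01, 0.1],               # rho
))
```

Any classifier with compatible `fit`/`score` callables can be dropped in; the 108-point grid and the 10-fold split mirror the paper's description, while the fold seeding and tie-breaking are assumptions the paper does not specify.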