Label Distribution Learning with Label Correlations via Low-Rank Approximation

Authors: Tingting Ren, Xiuyi Jia, Weiwei Li, Shu Zhao

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results on real-world data sets show that the proposed algorithm outperforms state-of-the-art LDL methods.
Researcher Affiliation | Academia | 1 School of Computer Science and Engineering, Nanjing University of Science and Technology, China; 2 Jiangsu Key Laboratory of Big Data Security & Intelligent Processing, Nanjing University of Posts and Telecommunications, China; 3 State Key Laboratory for Novel Software Technology, Nanjing University, China; 4 College of Astronautics, Nanjing University of Aeronautics and Astronautics, China; 5 School of Computer Science and Technology, Anhui University, China
Pseudocode | Yes | Algorithm 1: The LDL-LCLR Framework
Open Source Code | No | The paper states that 'all the codes are shared by original authors,' but this refers to the baseline methods, not necessarily to the proposed LDL-LCLR algorithm. No link or explicit statement about releasing the authors' own code is provided.
Open Datasets | Yes | The proposed method is evaluated on 15 real-world data sets [Geng, 2016] covering natural scene recognition, biological information classification and emotion analysis, among other fields.
Dataset Splits | Yes | On each data set, ten times tenfold cross-validation is conducted, and the mean value and standard deviation of each evaluation criterion are recorded.
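The reported evaluation protocol (ten runs of tenfold cross-validation, recording the mean and standard deviation of each criterion) can be sketched as follows. This is an illustrative sketch, not the authors' code: the baseline predictor is a placeholder for the LDL-LCLR model, and Chebyshev distance stands in for the paper's evaluation criteria.

```python
import numpy as np

def repeated_tenfold(X, Y, fit_predict, score, n_repeats=10, seed=0):
    """Ten-times tenfold CV: returns (mean, std) of `score` over all
    100 test folds. Illustrative protocol only, not the authors' code."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_repeats):
        folds = np.array_split(rng.permutation(len(X)), 10)
        for i, test in enumerate(folds):
            train = np.concatenate([f for j, f in enumerate(folds) if j != i])
            scores.append(score(Y[test], fit_predict(X[train], Y[train], X[test])))
    return float(np.mean(scores)), float(np.std(scores))

# Usage with a trivial baseline that predicts the mean training
# label distribution (placeholder for an actual LDL model):
X = np.random.default_rng(1).normal(size=(50, 8))
Y = np.abs(np.random.default_rng(2).normal(size=(50, 5)))
Y /= Y.sum(axis=1, keepdims=True)  # rows are label distributions
baseline = lambda Xtr, Ytr, Xte: np.tile(Ytr.mean(axis=0), (len(Xte), 1))
chebyshev = lambda yt, yp: np.max(np.abs(yt - yp), axis=1).mean()
mean_err, std_err = repeated_tenfold(X, Y, baseline, chebyshev)
```

Chebyshev distance is one of the standard LDL evaluation criteria from [Geng, 2016]; any of the other distance or similarity measures could be plugged in via the `score` argument.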
Hardware Specification | No | The paper does not provide any details about the hardware used to run the experiments.
Software Dependencies | No | The paper mentions methods such as L-BFGS but does not specify any software libraries or dependencies with version numbers.
Experiment Setup | Yes | In LDL-LCLR, the parameters λ1, λ2, λ3, λ4 and k are set to 0.0001, 0.001, 0.001, 0.001 and 4, respectively, and ρ is simply set to 1. k-means is used to cluster samples. The maximum number of iterations is set to 100. S and Z are initialized as identity matrices; all other variables are initialized to zero.
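Read literally, the reported setup corresponds to an initialization along these lines. This is a hedged sketch under assumptions: only S, Z, and the hyperparameter values are stated explicitly in the paper, so the weight matrix name `W` and all shapes are illustrative.

```python
import numpy as np

# Hyperparameter values exactly as reported for LDL-LCLR.
PARAMS = {"lambda1": 1e-4, "lambda2": 1e-3, "lambda3": 1e-3,
          "lambda4": 1e-3, "k": 4, "rho": 1.0, "max_iter": 100}

def init_variables(n_features, n_labels):
    """S and Z start as identity matrices, per the reported setup;
    the remaining variables (here a weight matrix W, an assumed
    name and shape) start at zero."""
    S = np.eye(n_labels)                  # label-correlation matrix
    Z = np.eye(n_labels)                  # low-rank correlation component
    W = np.zeros((n_features, n_labels))  # model weights: all-zero init
    return S, Z, W

S, Z, W = init_variables(n_features=10, n_labels=6)
```

The k in `PARAMS` is the number of sample clusters produced by k-means, which the method uses to capture local label correlations.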