Latent Semantics Encoding for Label Distribution Learning

Authors: Suping Xu, Lin Shang, Furao Shen

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical studies on 15 real-world data sets validate the effectiveness of the proposed algorithm.
Researcher Affiliation | Academia | State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China; Department of Computer Science and Technology, Nanjing University, Nanjing 210023, China
Pseudocode | Yes | The pseudo codes of LSE-LDL are presented in Algorithm 1 and Algorithm 2, which correspond to the training phase and the testing phase, respectively.
Open Source Code | No | The paper does not provide concrete access to its own source code. It only mentions that 'All the codes of above compared algorithms are shared by original authors'.
Open Datasets | Yes | The 15 data sets come from the LDL website (http://ldl.herokuapp.com/download).
Dataset Splits | Yes | On each data set, ten-fold cross-validation is employed for performance evaluation, and the mean and standard deviation of the ten per-fold results are recorded.
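The evaluation protocol described above (ten-fold cross-validation, reporting mean and standard deviation over folds) can be sketched as follows. This is an illustrative Python reconstruction, not the authors' Matlab code: the data are synthetic, the "model" is a trivial mean-distribution predictor, and `chebyshev_dist` is one commonly used LDL metric chosen here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 8))             # synthetic feature matrix (150 instances)
D = rng.dirichlet(np.ones(5), size=150)   # synthetic label distributions (rows sum to 1)

def chebyshev_dist(d_true, d_pred):
    # Chebyshev distance per instance (max absolute difference), averaged over instances
    return np.abs(d_true - d_pred).max(axis=1).mean()

def run_cv(X, D, n_folds=10, seed=0):
    # shuffle indices once, then split into n_folds roughly equal folds
    idx = np.random.default_rng(seed).permutation(len(X))
    folds = np.array_split(idx, n_folds)
    scores = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        # placeholder "model": predict the mean training label distribution
        pred = np.tile(D[train].mean(axis=0), (len(test), 1))
        scores.append(chebyshev_dist(D[test], pred))
    # mean and standard deviation over the ten folds, as in the paper's protocol
    return float(np.mean(scores)), float(np.std(scores))

mean_score, std_score = run_cv(X, D)
```

Any real use would substitute a trained LDL model for the placeholder predictor inside the loop.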
Hardware Specification | Yes | All the experiments were carried out on a workstation equipped with an Intel Core i7-6850K CPU (3.60 GHz) and 32.00 GB memory.
Software Dependencies | Yes | We implement all LDL algorithms in Matlab R2017b.
Experiment Setup | Yes | In LSE-LDL, to model the local geometric structures in the latent semantic feature space, σ and ρ are set to 0.05 and 1% of the training instances, respectively. The number of selected features satisfies Q ≤ m. The regularization parameters in LSE-LDL are tuned with a grid-search strategy by varying their values over {0.001, 0.01, 0.1, 1.0, 10}. The maximum number of iterations is 5000, and the small positive constant ϵ = 0.0001.
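The grid-search tuning described above can be sketched as a small exhaustive loop. This is a hypothetical illustration, not the authors' implementation: it assumes two regularization parameters for concreteness, and `train_and_score` stands in for training LSE-LDL with a given parameter pair and returning a validation error (lower is better).

```python
import itertools

# candidate values from the paper's search range
GRID = [0.001, 0.01, 0.1, 1.0, 10]

def grid_search(train_and_score, grid=GRID):
    # exhaustively evaluate every parameter pair; keep the best (lowest) score
    best = None
    for lam1, lam2 in itertools.product(grid, grid):
        score = train_and_score(lam1, lam2)
        if best is None or score < best[0]:
            best = (score, lam1, lam2)
    return best  # (best score, best lambda1, best lambda2)

# toy objective just to exercise the search; minimized at (0.1, 1.0)
best = grid_search(lambda a, b: (a - 0.1) ** 2 + (b - 1.0) ** 2)
```

With 5 candidate values per parameter, the search trains 25 models; each would itself be evaluated under the ten-fold protocol in a full reproduction.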