Label Enhancement for Label Distribution Learning

Authors: Ning Xu, An Tao, Xin Geng

IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on one artificial dataset and fourteen real-world datasets show clear advantages of GLLE over several existing LE algorithms.
Researcher Affiliation | Academia | MOE Key Laboratory of Computer Network and Information Integration, China; School of Computer Science and Engineering, Southeast University, Nanjing 210096, China; School of Information Science and Engineering, Southeast University, Nanjing 210096, China
Pseudocode | No | The paper describes the algorithms through mathematical formulations and textual explanations but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an unambiguous statement or link regarding the release of source code for the described methodology.
Open Datasets | Yes | The second to the fourteenth datasets are real-world LDL datasets [Geng, 2016] collected from biological experiments on yeast genes, facial expression images, natural scene images and movies, respectively. http://cse.seu.edu.cn/PersonalPage/xgeng/LDL/index.htm
Dataset Splits | Yes | Ten-fold cross validation is conducted for each algorithm. (A protocol sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or memory capacity used for the experiments.
Software Dependencies | No | The paper mentions optimization methods such as BFGS and SA-BFGS but does not give version numbers for any software dependencies.
Experiment Setup | Yes | For GLLE, the parameter λ is chosen among {10^-2, 10^-1, ..., 10^2} and the number of neighbors K is set to c + 1. The kernel function in GLLE is the Gaussian kernel. The parameter α in LP is set to 0.5. The number of neighbors K for ML is set to c + 1. The parameter β in FCM is set to 2. The kernel function in KM is the Gaussian kernel. (A configuration sketch follows the table.)
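
The ten-fold protocol noted in the Dataset Splits row can be summarized in a short sketch. This is a minimal illustration, not the paper's code: the names run_le (standing in for GLLE or any baseline LE algorithm) and score (standing in for whichever distance or similarity measure is reported) are assumptions introduced here.

    # Minimal ten-fold cross-validation loop, assuming a feature matrix X and
    # ground-truth label distributions D as NumPy arrays. run_le and score are
    # hypothetical stand-ins, not interfaces from the paper.
    import numpy as np
    from sklearn.model_selection import KFold

    def ten_fold(X, D, run_le, score):
        fold_scores = []
        for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
            # Recover label distributions on the held-out fold.
            recovered = run_le(X[train_idx], D[train_idx], X[test_idx])
            fold_scores.append(score(D[test_idx], recovered))
        # Report the evaluation measure averaged over the ten folds.
        return float(np.mean(fold_scores))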
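
Likewise, the settings in the Experiment Setup row fit into a small configuration sketch. All identifiers below, including the dictionary keys and the placeholder num_labels for the label count c, are invented for illustration, and the λ grid follows the reconstruction above.

    # Reported hyperparameter settings for GLLE and the compared algorithms;
    # every name here is an illustrative placeholder, not an API from the paper.
    num_labels = 6  # c: number of labels in the dataset at hand (placeholder value)

    settings = {
        "GLLE": {
            "lambda_grid": [10.0 ** p for p in range(-2, 3)],  # λ chosen among {10^-2, ..., 10^2}
            "n_neighbors": num_labels + 1,                     # K = c + 1
            "kernel": "gaussian",
        },
        "LP": {"alpha": 0.5},
        "ML": {"n_neighbors": num_labels + 1},                 # K = c + 1
        "FCM": {"beta": 2},
        "KM": {"kernel": "gaussian"},
    }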