Semi-Supervised Adaptive Label Distribution Learning for Facial Age Estimation

Authors: Peng Hou, Xin Geng, Zeng-Wei Huo, Jia-Qi Lv

AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that SALDL performs remarkably better than state-of-the-art algorithms when there are only limited accurately labeled data available.
Researcher Affiliation | Academia | Peng Hou, Xin Geng, Zeng-Wei Huo, Jia-Qi Lv; MOE Key Laboratory of Computer Network and Information Integration, School of Computer Science and Engineering, Southeast University, Nanjing 210096, China; {hpeng, xgeng, huozw, lvjiaqi}@seu.edu.cn
Pseudocode | Yes | Algorithm 1 (SALDL). Input: the initial standard deviation σ0; the number of nearest neighbors K; the balance parameter C; the maximum number of iterations T; the labeled image set Sl = {(x1, μ1), ..., (xl, μl)}; the unlabeled image set Su = {xl+1, ..., xl+u}. Output: p(y|x; Θ). (A hedged implementation sketch follows this table.)
Open Source Code | No | The paper does not provide any statement or link regarding the availability of its source code.
Open Datasets | Yes | The dataset used in the experiments is MORPH (Ricanek Jr and Tesafaye 2006).
Dataset Splits | Yes | The test images are randomly selected once and then fixed across all experiments; there are 5,000 test images. The semi-supervised methods (LP, SLDL, and SALDL) always use 50,000 training images. All parameters are determined through 10-fold cross-validation. (See the split sketch after this table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used for its experiments.
Software Dependencies | No | The paper mentions various algorithms and methods (e.g., BIF, MFA, KPLS, OHRank, LDL, ALDL, LP, BFGS) but does not specify any software names with version numbers as implementation dependencies (e.g., Python, PyTorch, TensorFlow, scikit-learn).
Experiment Setup | Yes | For SALDL, the initial standard deviation σ0 is set to 3, the number of nearest neighbors K is set to 10, and the balance weight C is set to 0.001. The maximum number of iterations T decreases as the number of labeled training images increases. (See the setup sketch after this table.)
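
Only Algorithm 1's inputs and output are quoted above, and no source code is available, so the following Python sketch is a hypothetical reconstruction of a semi-supervised adaptive LDL training loop rather than the authors' method. The helper names (gaussian_label_distribution, fit_maxent_ldl, saldl_sketch) and the specific propagation and adaptation rules are assumptions; in particular, the paper's own rule for adapting the standard deviation across iterations should be taken from the original text.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def gaussian_label_distribution(age, ages, sigma):
    """Discrete Gaussian label distribution over the age range, centred at `age`."""
    d = np.exp(-(ages - age) ** 2 / (2.0 * sigma ** 2))
    return d / d.sum()

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_maxent_ldl(X, D, C, theta0=None):
    """Fit a maximum-entropy model p(y|x; Theta) = softmax(X @ Theta) by minimising
    the cross-entropy to the target distributions D, with an L2 penalty weighted by C
    (quasi-Newton optimisation, in the spirit of the BFGS solver mentioned in the paper)."""
    n, d = X.shape
    m = D.shape[1]

    def obj(flat):
        Theta = flat.reshape(d, m)
        P = softmax(X @ Theta)
        ce = -(D * np.log(P + 1e-12)).sum()
        return ce + C * (Theta ** 2).sum()

    x0 = np.zeros(d * m) if theta0 is None else theta0.ravel()
    res = minimize(obj, x0, method="L-BFGS-B")
    return res.x.reshape(d, m)

def saldl_sketch(Xl, yl, Xu, ages, sigma0=3.0, K=10, C=1e-3, T=20):
    """Hypothetical SALDL skeleton: assign Gaussian label distributions to the labeled
    images, initialise the unlabeled targets from their K nearest labeled neighbours,
    then alternately refit the LDL model and adapt the unlabeled targets for T rounds."""
    # Targets for labeled images: Gaussian centred at the true age, width sigma0.
    Dl = np.stack([gaussian_label_distribution(a, ages, sigma0) for a in yl])
    # Initial targets for unlabeled images: mean distribution of K nearest labeled neighbours.
    nn = np.argsort(cdist(Xu, Xl), axis=1)[:, :K]
    Du = Dl[nn].mean(axis=1)
    X = np.vstack([Xl, Xu])
    Theta = None
    for _ in range(T):
        D = np.vstack([Dl, Du])
        Theta = fit_maxent_ldl(X, D, C, Theta)
        # Adapt the unlabeled targets toward the model's current predictions.
        Du = softmax(Xu @ Theta)
    return Theta
```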
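The dataset-split and experiment-setup rows translate into a small amount of setup code. The snippet below is a minimal sketch assuming NumPy; the random seed and the total MORPH image count are placeholders, since the paper states only that the 5,000 test images are drawn once and fixed across all experiments.

```python
import numpy as np

# Reported settings (sigma0 = 3, K = 10, C = 0.001); variable names here are illustrative only.
SIGMA0, K, C = 3.0, 10, 1e-3
N_TEST, N_TRAIN = 5_000, 50_000        # fixed test set; training pool used by LP/SLDL/SALDL

n_images = 55_000                      # assumed placeholder for the total MORPH image count
rng = np.random.default_rng(0)         # assumed seed; the paper reports none
perm = rng.permutation(n_images)
test_idx = perm[:N_TEST]               # test images drawn once and held fixed for all experiments
train_idx = perm[N_TEST:N_TEST + N_TRAIN]
```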