Learn the Highest Label and Rest Label Description Degrees

Authors: Jing Wang, Xin Geng

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Theoretical analysis shows the generalization of LDL-HR. Besides, the experimental results on 18 real-world datasets validate the statistical superiority of our method.
Researcher Affiliation | Academia | Jing Wang and Xin Geng, MOE Key Laboratory of Computer Network and Information Integration, School of Computer Science and Engineering, Southeast University, Nanjing 210096, China. {wangjing91, xgeng}@seu.edu.cn
Pseudocode | No | The paper describes the optimization process and the gradient calculation but does not include a formally labeled "Algorithm" or "Pseudocode" block.
Open Source Code | Yes | Available at: https://github.com/wangjing4research/LDL_HR
Open Datasets | Yes | Table 1 summarizes the statistics of the experimental datasets. The first 15 datasets (from Alpha to SBU_3DFE) are collected by Geng [2016]. The last three datasets, M2B [Nguyen et al., 2012], SCUT-FBP [Xie et al., 2015], and fbp5500 [Liang et al., 2018], are about facial beauty perception.
Dataset Splits | Yes | We first tune the parameters of each method by 10-fold cross-validation, and then run each method with the best parameters over 10 random data partitions (90% for training and 10% for testing).
Hardware Specification | No | The paper does not report hardware details such as GPU model, CPU type, or memory specifications used for the experiments.
Software Dependencies | No | The paper mentions applying the quasi-Newton algorithm L-BFGS, but it does not name any software libraries or dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow); a typical L-BFGS invocation is sketched after this table.
Experiment Setup | Yes | For LDL-HR, λ1 = 0.001, λ2 and λ3 are tuned from the candidate set {10^-3, ..., 1}, and ρ = 0.01. Parameters are first tuned by 10-fold cross-validation, and each method is then run with its best parameters over 10 random data partitions (90% for training and 10% for testing); the full protocol is sketched below.
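
The paper names only a quasi-Newton L-BFGS solver, without committing to a library. As a minimal sketch of how such a solver is typically driven, the snippet below uses SciPy's scipy.optimize.minimize with method="L-BFGS-B". The objective ldl_hr_loss_and_grad is a hypothetical placeholder (a softmax parametrization with a KL-style loss, common in the LDL literature), not the authors' actual LDL-HR objective.

```python
import numpy as np
from scipy.optimize import minimize

def ldl_hr_loss_and_grad(w, X, D):
    """Hypothetical stand-in for the LDL-HR objective and its gradient.

    w: flattened parameter matrix, shape (n_features * n_labels,)
    X: feature matrix, shape (n_samples, n_features)
    D: ground-truth label distributions, shape (n_samples, n_labels)
    """
    W = w.reshape(X.shape[1], D.shape[1])
    scores = X @ W
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    P = np.exp(scores)
    P /= P.sum(axis=1, keepdims=True)             # softmax predictions
    loss = -np.sum(D * np.log(P + 1e-12))         # KL-style placeholder loss
    grad = X.T @ (P - D)                          # its gradient w.r.t. W
    return loss, grad.ravel()

# Quasi-Newton optimization with L-BFGS, as the paper reports using.
rng = np.random.default_rng(0)
X = rng.random((100, 10))                         # toy features
D = rng.dirichlet(np.ones(5), size=100)           # toy label distributions
res = minimize(ldl_hr_loss_and_grad, np.zeros(10 * 5), args=(X, D),
               jac=True, method="L-BFGS-B")
W_opt = res.x.reshape(10, 5)
```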
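
The quoted protocol (10-fold cross-validation to pick hyperparameters, then 10 random 90/10 train/test partitions with the best parameters) can be written out as follows. This is a sketch under stated assumptions: train_ldl is a hypothetical training routine, the candidate grid is assumed to step in powers of ten between 10^-3 and 1, and Chebyshev distance stands in for whichever LDL measures the paper actually reports.

```python
import numpy as np
from itertools import product
from sklearn.model_selection import KFold, train_test_split

def chebyshev(D_true, D_pred):
    """Chebyshev distance, a standard LDL measure (lower is better)."""
    return np.abs(D_true - D_pred).max(axis=1).mean()

def tune_and_evaluate(X, D, train_ldl, candidates=(1e-3, 1e-2, 1e-1, 1.0)):
    # Step 1: choose lambda2 and lambda3 from the candidate set by
    # 10-fold cross-validation.
    best_params, best_score = None, np.inf
    for lam2, lam3 in product(candidates, repeat=2):
        fold_scores = []
        for tr, va in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
            model = train_ldl(X[tr], D[tr], lam2=lam2, lam3=lam3)
            fold_scores.append(chebyshev(D[va], model.predict(X[va])))
        if np.mean(fold_scores) < best_score:
            best_params, best_score = (lam2, lam3), np.mean(fold_scores)

    # Step 2: rerun with the best parameters on 10 random 90%/10% splits.
    test_scores = []
    for seed in range(10):
        X_tr, X_te, D_tr, D_te = train_test_split(X, D, test_size=0.1,
                                                  random_state=seed)
        model = train_ldl(X_tr, D_tr, lam2=best_params[0], lam3=best_params[1])
        test_scores.append(chebyshev(D_te, model.predict(X_te)))
    return best_params, float(np.mean(test_scores)), float(np.std(test_scores))
```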