Incomplete Label Distribution Learning

Authors: Miao Xu, Zhi-Hua Zhou

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments validate the effectiveness of our proposal."
Researcher Affiliation | Academia | "Miao Xu and Zhi-Hua Zhou, National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China, {xum,zhouzh}@lamda.nju.edu.cn"
Pseudocode | Yes | "Algorithm 1 IncomLDL-prox"
Open Source Code | No | "All the codes are shared by original authors, and we use the default parameter suggested there, except that we tune the regularization parameter for the PTSVM algorithm using 10-fold cross-validation in the same way as in IncomLDL-prox."
Open Datasets | Yes | "We evaluate the proposed algorithms for the IncomLDL problem on 15 real data sets. ... Details of them can be found in [Geng, 2016]. Here we summarize their statistics in Table 1."
Dataset Splits | Yes | "In IncomLDL-prox, the regularization parameter is selected from 2^{−10,−9,...,9,10} by cross-validation on training data. ... The value is measured by 10-fold CV, shown in mean±std form."
Hardware Specification | Yes | "All the results were obtained on a Linux server with CPU 2.53GHz and 48GB memory."
Software Dependencies | No | "We implement our approaches in Matlab."
Experiment Setup | Yes | "In IncomLDL-prox, the regularization parameter is selected from 2^{−10,−9,...,9,10} by cross-validation on training data. Parameters γ and ϵ are set to 2 and 10^−5, respectively. The maximum number of iterations is set to 100. In IncomLDL-admm, the regularization parameter λ and the maximum number of iterations are selected in the same way as in IncomLDL-prox. ρ_1 is simply set to 1 and all variables are initialized to zero. The stopping criterion parameters ϵ_abs and ϵ_rel are set to 10^−4 and 10^−2, as suggested in the survey [Boyd et al., 2011]."
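The parameter-selection protocol quoted above (λ chosen from 2^{−10}, ..., 2^{10} by 10-fold cross-validation on the training data, with scores reported in mean±std form) can be sketched as follows. This is a minimal illustration, not the paper's Matlab code: the closed-form ridge-regression learner and the synthetic data are stand-in assumptions for the actual IncomLDL objective.

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Split n sample indices into k roughly equal, shuffled folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n), k)

def cv_select_lambda(X, y, lambdas, k=10):
    """Pick the lambda with the lowest mean k-fold CV error.

    Ridge regression is a hypothetical stand-in learner; the report dict
    maps each lambda to (mean error, std), i.e. the mean+-std values.
    """
    folds = kfold_indices(len(X), k)
    report = {}
    for lam in lambdas:
        errs = []
        for i in range(k):
            test = folds[i]
            train = np.concatenate([folds[j] for j in range(k) if j != i])
            Xt, yt = X[train], y[train]
            # Closed-form ridge solution: w = (X'X + lam*I)^{-1} X'y
            w = np.linalg.solve(Xt.T @ Xt + lam * np.eye(X.shape[1]),
                                Xt.T @ yt)
            errs.append(np.mean((X[test] @ w - y[test]) ** 2))
        report[lam] = (np.mean(errs), np.std(errs))
    best = min(report, key=lambda l: report[l][0])
    return best, report

# Candidate grid 2^{-10}, ..., 2^{10}, as in the quoted setup.
lambdas = [2.0 ** p for p in range(-10, 11)]
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(200)
lam, report = cv_select_lambda(X, y, lambdas)
```

In the quoted setup the same search is reused for IncomLDL-admm's λ, so a helper like this would only need to be written once.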
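The stopping criterion mentioned for IncomLDL-admm (ϵ_abs = 10^−4, ϵ_rel = 10^−2) follows the standard test from the cited Boyd et al. (2011) survey. A sketch of that generic test for an ADMM constraint Ax + Bz = c is below; the residual definitions are the survey's, not the paper's specific matrices.

```python
import numpy as np

def admm_converged(Ax, Bz, c, ATy, r, s, eps_abs=1e-4, eps_rel=1e-2):
    """Boyd et al. (2011) stopping test for ADMM with constraint Ax + Bz = c.

    r is the primal residual (Ax + Bz - c) and s the dual residual;
    ATy is A^T times the scaled dual variable. Iteration stops when both
    residual norms fall below their absolute-plus-relative tolerances.
    """
    p, n = r.size, s.size
    eps_pri = np.sqrt(p) * eps_abs + eps_rel * max(
        np.linalg.norm(Ax), np.linalg.norm(Bz), np.linalg.norm(c))
    eps_dual = np.sqrt(n) * eps_abs + eps_rel * np.linalg.norm(ATy)
    return np.linalg.norm(r) <= eps_pri and np.linalg.norm(s) <= eps_dual

# Tiny residuals pass the test; large ones do not.
zero = np.zeros(3)
ok = admm_converged(np.ones(3), -np.ones(3), zero, np.ones(3), zero, zero)
bad = admm_converged(np.ones(3), -np.ones(3), zero, np.ones(3),
                     np.ones(3), zero)
```

With ϵ_rel = 10^−2 the relative term dominates once the iterates have non-trivial norm, which matches the loose tolerance the quoted setup adopts from the survey.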