Discrete Binary Coding based Label Distribution Learning

Authors: Ke Wang, Xin Geng

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results on five real-world datasets demonstrate its superior performance over several state-of-the-art LDL methods with the lower time cost."
Researcher Affiliation | Academia | "Ke Wang and Xin Geng; MOE Key Laboratory of Computer Network and Information Integration, China; School of Computer Science and Engineering, Southeast University, Nanjing 210096, China; {k.wang, xgeng}@seu.edu.cn"
Pseudocode | Yes | "Algorithm 1 DBC-LDL"
Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | "We conduct our experiments on five real-world datasets, namely M2B (Multi-Modality Beauty) [Nguyen et al., 2012], s-BU 3DFE (scores-Binghamton University 3D Facial Expression) [Zhou et al., 2015], Twitter LDL [Yang et al., 2017], Flickr LDL [Yang et al., 2017], and Ren-CECps [Quan and Ren, 2010], to evaluate our algorithm in terms of both accuracy and efficiency."
Dataset Splits | Yes | "All the results are averaged over 10-fold cross validation in terms of both accuracy and time cost."
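The 10-fold cross-validation protocol cited above (average a per-fold metric over 10 disjoint held-out folds) can be sketched as follows. This is an illustrative sketch only: `toy_metric` and the sample count of 100 are placeholders, not the paper's data or evaluation measures.

```python
def kfold_indices(n_samples, n_folds=10):
    """Yield (train, test) index lists for n_folds disjoint, exhaustive folds."""
    base, extra = divmod(n_samples, n_folds)
    start = 0
    for i in range(n_folds):
        size = base + (1 if i < extra else 0)  # spread the remainder over early folds
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, test
        start += size

def cross_validate(n_samples, metric, n_folds=10):
    """Average a per-fold metric, as in '10-fold cross validation'."""
    scores = [metric(train, test) for train, test in kfold_indices(n_samples, n_folds)]
    return sum(scores) / len(scores)

# Placeholder metric: fraction of samples held out per fold (not a real LDL measure).
toy_metric = lambda train, test: len(test) / (len(train) + len(test))
avg_score = cross_validate(100, toy_metric)  # → 0.1 for 100 samples, 10 folds
```

In a real run, `metric` would train an LDL model on `train` and score its predicted label distributions on `test`.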
Hardware Specification | Yes | "All the experiments are implemented using Matlab on a standard PC with a 2.30GHz Intel CPU and 12GB memory."
Software Dependencies | No | The paper mentions "implemented using Matlab" but does not provide a specific version number for Matlab or for any other software dependency.
Experiment Setup | Yes | "For the proposed DBC-LDL, we empirically set the parameters α = 10^4, β = 10^4 and γ = 10^-2. The code length in DBC-LDL and BC-LDL is the same (i.e., 128 bits) for a fair comparison, and k in DBC-LDL, BC-LDL and AA-kNN is chosen from {10, 20, ..., 50}."
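Choosing k from {10, 20, ..., 50} amounts to a small grid search over five candidates. The sketch below assumes a hypothetical `validation_error(k)` callback; the synthetic error curve used here is purely illustrative and not taken from the paper's results.

```python
def select_k(candidates, validation_error):
    """Return the candidate k with the lowest validation error (grid search)."""
    return min(candidates, key=validation_error)

candidates = [10, 20, 30, 40, 50]  # the grid reported in the setup above
# Synthetic, illustrative error curve minimized at k = 30 (not the paper's data).
best_k = select_k(candidates, lambda k: abs(k - 30))  # → 30
```

In practice, `validation_error` would be the cross-validated LDL error of the model trained with that k, so the same 10-fold protocol doubles as the model-selection criterion.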