Label Distribution Learning Machine
Authors: Jing Wang, Xin Geng
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results validate the better classification performance of LDLM. |
| Researcher Affiliation | Academia | 1School of Computer Science and Engineering, Southeast University, Nanjing, China 2Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education. Correspondence to: Xin Geng <xgeng@seu.edu.cn>. |
| Pseudocode | Yes | The details of the algorithm are presented in Algorithm 1 |
| Open Source Code | No | The paper only provides links to open-source code for baseline methods (EDL-LRL and LDLFs) and not for their own proposed methodology (LDLM). |
| Open Datasets | Yes | The first 15 datasets are collected by Geng (2016), where the first ten (from Alpha to Spoem) are from the clustering analysis of genome-wide expression in Yeast Saccharomyces cerevisiae (Eisen et al., 1998), the Scene is a multi-label image dataset whose label distributions are transformed from rankings (Geng & Luo, 2014), the Gene is obtained from the research on the relation between gene and diseases (Yu et al., 2012), the Movie is collected from user ratings on movies (Geng & Hou, 2015), and the SJAFFE and SBU 3DFE are collected from JAFFE (Lyons et al., 1998) and BU 3DFE (Yin et al., 2006), respectively. The M2B (Nguyen et al., 2012) and SCUT-FBP (Xie et al., 2015) are about facial beauty perception, which are pre-processed as (Ren & Geng, 2017). |
| Dataset Splits | Yes | We tune the parameters of each method by ten-fold cross-validation. |
| Hardware Specification | Yes | Moreover, we implement LDLM in Python and carry out the experiments on a Linux server with a 2.70GHz CPU and 62GB memory. |
| Software Dependencies | No | The paper states "we implement LDLM in Python" but does not provide specific version numbers for Python or any other key software libraries or dependencies. |
| Experiment Setup | Yes | For LDLM, λ1 = 0.001, λ2 and λ3 are tuned from the candidate set {10⁻³, …, 1}, and ρ = 0.01. We tune the parameters of each method by ten-fold cross-validation. |
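The tuning protocol quoted above (ten-fold cross-validation over a candidate set for λ2 and λ3) can be sketched in plain Python. This is a minimal illustration, not the authors' code: the candidate set is assumed to be powers of ten from 10⁻³ to 1, and `evaluate` is a hypothetical placeholder standing in for training LDLM on the training folds and scoring it on the held-out fold.

```python
import itertools

# Assumed reading of the candidate set {10^-3, ..., 1}: powers of ten.
CANDIDATES = [1e-3, 1e-2, 1e-1, 1.0]

def ten_fold_indices(n, k=10):
    """Split indices 0..n-1 into k contiguous folds of near-equal size."""
    fold_size, rem = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        end = start + fold_size + (1 if i < rem else 0)
        folds.append(list(range(start, end)))
        start = end
    return folds

def tune(n_samples, evaluate):
    """Return the (λ2, λ3) pair with the best mean CV score.

    `evaluate(train_idx, val_idx, lam2, lam3)` is a hypothetical hook:
    in the paper it would train LDLM with the given regularizers and
    return a validation score (higher is better).
    """
    folds = ten_fold_indices(n_samples)
    best, best_score = None, float("-inf")
    for lam2, lam3 in itertools.product(CANDIDATES, CANDIDATES):
        scores = []
        for i, val_idx in enumerate(folds):
            # Train on all folds except the i-th, validate on the i-th.
            train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
            scores.append(evaluate(train_idx, val_idx, lam2, lam3))
        mean = sum(scores) / len(scores)
        if mean > best_score:
            best, best_score = (lam2, lam3), mean
    return best, best_score
```

In practice one would plug in an LDLM training routine and a label-distribution measure (e.g. Chebyshev distance or cosine similarity, negated or not so that higher is better) as `evaluate`.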