Label Embedding Based on Multi-Scale Locality Preservation
Authors: Cheng-Lun Peng, An Tao, Xin Geng
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate the effectiveness of MSLP in preserving the locality structure of label distributions in the embedding space and show its superiority over the state-of-the-art baseline methods. |
| Researcher Affiliation | Academia | (1) MOE Key Laboratory of Computer Network and Information Integration, China; (2) School of Computer Science and Engineering, Southeast University, Nanjing 210096, China; (3) School of Information Science and Engineering, Southeast University, Nanjing 210096, China. {chenglunpeng, taoan, xgeng}@seu.edu.cn |
| Pseudocode | Yes | Algorithm 1 MSLP |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | The LDL dataset Natural Scene (NS) [2014] is utilized. Two widely used facial expression datasets, s-JAFFE and s-BU 3DFE, are used here, for they have been extended to standard LDL datasets by Zhou et al. [2015]. We also collect two facial beauty datasets, SCUT-FBP and Multi-Modality Beauty (M2B), with the information of label distributions [Ren and Geng]. |
| Dataset Splits | Yes | For all datasets and algorithms, 10-fold cross-validation is conducted and the average performance is recorded (see the protocol sketch after the table). |
| Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers. |
| Experiment Setup | Yes | For MSLP, the squared Euclidean distance is chosen as dis(·), and σ is set to the average of the squared Euclidean distances over all pairs whose W⁺_y,ij ≠ 0. k⁺_α is chosen from {5, 10} and k⁻_α = k⁺_α simply; β is selected from {0, 0.1, 0.5}, λ is chosen from {0, 0.01, 0.1}, and k for decoding is chosen from {5, 10, 15}. The number of hidden-layer neurons for CPNN and AA-BP is set to 50, and k for AA-KNN is selected from {5, 10, 15}. BFGS-LDL and IIS-LDL follow the settings advised in [Geng, 2016]. The rbf kernel is used for LDSVR and PT-SVM, with its width set to the average Euclidean distance among training instances. Also, to show the superiority of MSLP over FE, four typical FE methods, CCA [Hardoon et al., 2003], LPP [He and Niyogi, 2004], NPE [He et al., 2005], and PCA [Jolliffe, 1986], are compared, with AA-KNN as their subsequent predictor. The number of neighbors for LPP and NPE is selected from {5, 10, 15}, and the width of the heat kernel for LPP is likewise set to the average of the squared Euclidean distances among its neighbor pairs. Moreover, the compared FE methods may be extended to their kernel versions with the rbf kernel, giving them every chance to beat MSLP. To be fair, the ratio u of the embedding dimensionality to the original feature dimensionality for MSLP and the FE methods ranges over {10%, 20%, ..., 100%}, and the best-performing value is adopted (the kernel-width heuristics and the dimensionality grid are sketched below). |
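The evaluation protocol in the Dataset Splits row (10-fold cross-validation with averaged scores) is straightforward to reproduce. Below is a minimal Python sketch; `X`, `Y`, `make_model`, and `metric` are hypothetical stand-ins for the features, the label distributions, an LDL predictor factory, and an LDL evaluation measure — none of these names come from the paper.

```python
import numpy as np
from sklearn.model_selection import KFold

def evaluate_10fold(X, Y, make_model, metric):
    """Run the paper's protocol: 10-fold CV, report the average score.

    X, Y, make_model, and metric are hypothetical placeholders, not
    identifiers from the paper.
    """
    kf = KFold(n_splits=10, shuffle=True, random_state=0)
    scores = []
    for train_idx, test_idx in kf.split(X):
        model = make_model()                      # fresh predictor per fold
        model.fit(X[train_idx], Y[train_idx])     # train on 9 folds
        Y_pred = model.predict(X[test_idx])       # predict the held-out fold
        scores.append(metric(Y[test_idx], Y_pred))
    return float(np.mean(scores))                 # average over the 10 folds
```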
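Two of the heuristics in the Experiment Setup row are easy to pin down in code: the rbf kernel width (the average pairwise Euclidean distance among training instances) and the candidate embedding dimensionalities (10% to 100% of the original feature dimensionality). The sketch below assumes NumPy/SciPy; note that the heat-kernel σ variant ignores the paper's restriction to pairs with nonzero weight W⁺_y,ij, so it is only an approximation of that heuristic.

```python
import numpy as np
from scipy.spatial.distance import pdist

def rbf_width(X_train):
    # Width of the rbf kernel = average pairwise Euclidean distance
    # among training instances (as used for LDSVR / PT-SVM above).
    return pdist(X_train, metric="euclidean").mean()

def heat_kernel_sigma(X_train):
    # sigma for the heat kernel = average of squared Euclidean distances.
    # The paper restricts this average to pairs with nonzero weight
    # W+_{y,ij}; this simplified sketch averages over all pairs instead.
    return pdist(X_train, metric="sqeuclidean").mean()

def embedding_dims(d):
    # Candidate embedding dimensionalities: ratio u in {10%, ..., 100%}
    # of the original feature dimensionality d; per the setup, the
    # best-performing candidate is the one adopted.
    return [max(1, int(round(u * d))) for u in np.arange(0.1, 1.01, 0.1)]
```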