Adaptive Semi-Supervised Learning with Discriminative Least Squares Regression
Authors: Minnan Luo, Lingling Zhang, Feiping Nie, Xiaojun Chang, Buyue Qian, Qinghua Zheng
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on several benchmark datasets demonstrate the effectiveness and superiority of the proposed model for multi-class classification tasks. |
| Researcher Affiliation | Academia | 1SPKLSTN Lab, Department of Computer Science, Xi'an Jiaotong University, Shaanxi, China. 2Center for OPTical Imagery Analysis and Learning, Northwestern Polytechnical University, China. 3School of Computer Science, Carnegie Mellon University, PA, USA. |
| Pseudocode | Yes | Algorithm 1 Alternative optimization for problem (2) |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described in this paper. |
| Open Datasets | Yes | several benchmark datasets of varying image types are used... including the ORL database of faces [Cai et al., 2007b], the extended Yale B database (Yale B) of faces [Georghiades et al., 2001], the face database CMU-PIE [Sim et al., 2002] and the palm print database (PALM) [Yan et al., 2007]. |
| Dataset Splits | No | The paper specifies training and testing splits, but does not explicitly describe a separate validation dataset split. |
| Hardware Specification | No | The paper discusses computational complexity and time performance, but does not specify any hardware details such as CPU, GPU, or memory used for experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers needed to replicate the experiment. |
| Experiment Setup | Yes | For the regularization parameter used in SDLSR, ℓ1SEMI, FME, ASL and our model, we tune them in the range of $\{10^{-3}, 10^{-2}, 10^{-1}, 10^{0}, 10^{1}, 10^{2}, 10^{3}\}$ and report the best results. The adaptive parameter used in ASL and our model is tuned from 1 to 2 with a step-size of 0.1. |
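
Below is a minimal sketch of the tuning protocol quoted in the Experiment Setup row, assuming a generic scikit-learn-style model wrapper; `model_factory`, `reg`, and `adaptive` are hypothetical names, since the paper releases no code. It sweeps the regularization parameter over the seven powers of ten and the adaptive parameter over 1.0 to 2.0 in steps of 0.1, keeping the best-scoring configuration.

```python
import numpy as np

def tune(model_factory, X_train, y_train, X_test, y_test):
    """Grid search matching the reported ranges (illustrative only)."""
    reg_grid = [10.0 ** k for k in range(-3, 4)]      # {1e-3, ..., 1e3}
    adaptive_grid = np.arange(1.0, 2.0 + 1e-9, 0.1)   # 1.0, 1.1, ..., 2.0

    best_acc, best_params = -np.inf, None
    for reg in reg_grid:
        for p in adaptive_grid:
            # model_factory is a placeholder for the (unreleased) model
            # constructor; fit/score follow the usual sklearn convention.
            model = model_factory(reg=reg, adaptive=p)
            model.fit(X_train, y_train)
            acc = model.score(X_test, y_test)
            if acc > best_acc:
                best_acc, best_params = acc, (reg, p)
    return best_acc, best_params
```

This mirrors the "report the best results" protocol stated in the setup row; note that, consistent with the Dataset Splits row, selection is done on the test split because no separate validation split is described.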