Semi-Supervised Dictionary Learning via Structural Sparse Preserving

Authors: Di Wang, Xiaoqin Zhang, Mingyu Fan, Xiuzi Ye

AAAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments are presented to show the superior performance of our method in classification applications. In this section, we first perform handwritten digit recognition on two widely used datasets: MNIST (LeCun et al. 1998) and USPS (Hull 1994). We then apply the proposed algorithm to face recognition on the UMIST (Wechsler et al. 1998) face dataset. Finally, we evaluate our approach on two public object datasets: SBData (Li and Allinson 2009) and COIL-20 (Nene, Nayar, and Murase 1996)."
Researcher Affiliation | Academia | "Di Wang, Xiaoqin Zhang, Mingyu Fan and Xiuzi Ye. College of Mathematics & Information Science, Wenzhou University, Zhejiang, China. wangdi@amss.ac.cn, zhangxiaoqinnan@gmail.com, {fanmingyu, yexiuzi}@wzu.edu.cn"
Pseudocode | Yes | Algorithm 1 ("Updating Sparse Codes A0") and Algorithm 2 ("The optimization procedure for the objective function (3)").
Open Source Code | No | The paper does not provide any concrete access information (a specific repository link, an explicit code-release statement, or code in supplementary materials) for the methodology described in the paper.
Open Datasets | Yes | "In this section, we first perform handwritten digit recognition on two widely used datasets: MNIST (LeCun et al. 1998) and USPS (Hull 1994). We then apply the proposed algorithm to face recognition on the UMIST (Wechsler et al. 1998) face dataset. Finally, we evaluate our approach on two public object datasets: SBData (Li and Allinson 2009) and COIL-20 (Nene, Nayar, and Murase 1996)."
Dataset Splits | Yes | "The parameters of all methods are obtained by using 5-fold cross-validation. For each dataset X, we first rearrange the order of the data samples randomly. Then, in each class of X, we randomly select τ samples as labeled samples, ν samples as unlabeled samples, and the rest are left as testing samples."
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup | Yes | "The parameters of all methods are obtained by using 5-fold cross-validation. For each dataset X, we first rearrange the order of the data samples randomly. Then, in each class of X, we randomly select τ samples as labeled samples, ν samples as unlabeled samples, and the rest are left as testing samples. In the experiments, we use the whole image as the feature vector and normalize the vector to have unit ℓ2-norm. Following the common evaluation procedure, we repeat the experiments 10 times with different random splits of the datasets and report the average classification accuracy together with the standard deviation; the best classification results are in boldface."
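The per-class split and feature normalization described in the quoted setup can be sketched as follows. This is a minimal illustration of the stated protocol, not the authors' released code; the function names and the use of NumPy are our assumptions.

```python
import numpy as np

def l2_normalize(X):
    """Normalize each row (one whole-image feature vector) to unit l2-norm,
    as described in the paper's experiment setup."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X / np.maximum(norms, 1e-12)  # guard against zero vectors

def semi_supervised_split(y, tau, nu, rng):
    """Per class: randomly pick `tau` labeled and `nu` unlabeled samples;
    the remaining samples of that class become test samples.
    Returns index arrays (labeled, unlabeled, test)."""
    labeled, unlabeled, test = [], [], []
    for c in np.unique(y):
        idx = rng.permutation(np.flatnonzero(y == c))  # shuffle this class
        labeled.extend(idx[:tau])
        unlabeled.extend(idx[tau:tau + nu])
        test.extend(idx[tau + nu:])
    return np.array(labeled), np.array(unlabeled), np.array(test)
```

Repeating this split with 10 different random seeds and averaging the resulting classification accuracies (with standard deviation) would mirror the evaluation procedure the quote describes.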