Support Regularized Sparse Coding and Its Fast Encoder

Authors: Yingzhen Yang, Jiahui Yu, Pushmeet Kohli, Jianchao Yang, Thomas S. Huang

ICLR 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results demonstrate the effectiveness of SRSC and Deep-SRSC. In this subsection, the superiority of SRSC is demonstrated by its performance in data clustering on various data sets, e.g. USPS handwritten digits data set, COIL-20, COIL-100 and UCI Gesture Phase Segmentation data set.
Researcher Affiliation | Collaboration | 1 Snap Research (superyyzg@gmail.com, jianchao.yang@snapchat.com); 2 Beckman Institute, University of Illinois at Urbana-Champaign ({jyu79,t-huang1}@illinois.edu); 3 Microsoft Research (pkohli@microsoft.com)
Pseudocode | Yes | Algorithm 1: Support Regularized Sparse Coding (a generic sketch of this alternating structure follows the table).
Open Source Code | Yes | SRSC is implemented by both MATLAB and CUDA C++ with extreme efficiency, and the code is published on GitHub: https://github.com/yingzhenyang/SRSC.
Open Datasets | Yes | USPS handwritten digits data set is comprised of n = 9298 handwritten images of ten digits from 0 to 9, and each image is of size 16 × 16 and represented by a 256-dimensional vector. The whole data set is divided into a training set of 7291 images and a test set of 2007 images. Two more data sets are used in this subsection, i.e. MNIST for handwritten digit recognition and CIFAR-10 for image recognition. MNIST is comprised of 60000 training images and 10000 test images of ten digits from 0 to 9, and each image is of size 28 × 28 and represented as a 784-dimensional vector. CIFAR-10 consists of 50000 training images and 10000 test images in 10 classes, and each image is a color image of size 32 × 32. (A split check follows the table.)
Dataset Splits | No | The paper explicitly states train and test splits for data sets like USPS, MNIST, and CIFAR-10 (e.g., 'The whole data set is divided into training set of 7291 images and test set of 2007 images.' for USPS), but it does not specify a separate validation split with the quantitative details needed for reproduction.
Hardware Specification | No | The paper mentions that 'SRSC is implemented by both MATLAB and CUDA C++ with extreme efficiency' and 'Deep-SRSC is implemented with TensorFlow (Abadi et al., 2016)', but it does not provide any specific details about the hardware, such as exact GPU or CPU models, memory, or cloud instance types, used for running the experiments.
Software Dependencies | No | The paper states that 'SRSC is implemented by both MATLAB and CUDA C++' and 'Deep-SRSC is implemented with TensorFlow (Abadi et al., 2016)', but it does not provide specific version numbers for MATLAB, CUDA C++, or TensorFlow, nor for any other key software components.
Experiment Setup | Yes | Throughout all the experiments, we set K = 3 for building the adjacency matrix A of the KNN graph, dictionary size p = 300, and λ = 0.1 for both ℓ2-RSC and SRSC. We also set γ(ℓ2) = 1, which is the suggested default value in (Zheng et al., 2011), and M = Mz = 5 and Mp = 50 in Algorithm 1. The default value of the weight for the support regularization term of SRSC is γ = 0.5. The initial learning rate is set to 10^-4 and divided by 10 at the 100th and 200th epochs, so the final learning rate is 10^-6 upon termination of the training. (A setup sketch follows the table.)
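
For orientation, the NumPy sketch below shows the generic alternating structure that dictionary-learning methods of this kind follow: ISTA-style soft-thresholding updates for the codes Z and projected gradient steps for the dictionary D. It implements only the plain ℓ1-regularized objective ||X − DZ||_F² + λ||Z||₁; the support regularization term that distinguishes SRSC, and the exact role of M, Mz, and Mp in the paper's Algorithm 1, are not reproduced here. All names are illustrative, not taken from the authors' code.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_coding_alternating(X, p=300, lam=0.1, M=5, Mz=5, Mp=50, seed=0):
    """Alternating minimization for min_{D,Z} ||X - D Z||_F^2 + lam * ||Z||_1.

    X is d x n (one column per sample). Illustrative only: SRSC's support
    regularizer is omitted, and M/Mz/Mp are assumed to be iteration counts.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    D = rng.standard_normal((d, p))
    D /= np.linalg.norm(D, axis=0, keepdims=True)  # unit-norm atoms
    Z = np.zeros((p, n))
    for _ in range(M):
        # Code update: Mz ISTA steps with step size 1/L, L = sigma_max(D)^2.
        L = np.linalg.norm(D, 2) ** 2
        for _ in range(Mz):
            grad = D.T @ (D @ Z - X)
            Z = soft_threshold(Z - grad / L, lam / (2 * L))
        # Dictionary update: Mp projected gradient steps, re-normalizing columns.
        Lz = np.linalg.norm(Z, 2) ** 2 + 1e-12
        for _ in range(Mp):
            D -= ((D @ Z - X) @ Z.T) / Lz
            D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
    return D, Z
```

The authors' GitHub release linked above, not this sketch, is the reference implementation of Algorithm 1.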
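
As a quick check of the MNIST and CIFAR-10 splits quoted in the Open Datasets row, the following sketch loads both data sets via tf.keras.datasets (assuming TensorFlow is installed; USPS is not bundled with Keras and is omitted):

```python
import tensorflow as tf

# MNIST: 60000 training / 10000 test grayscale images of size 28 x 28
(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.mnist.load_data()
assert x_tr.shape == (60000, 28, 28) and x_te.shape == (10000, 28, 28)
# Flatten each image to the 784-dimensional vector described in the paper.
x_tr_flat = x_tr.reshape(len(x_tr), -1)  # shape (60000, 784)

# CIFAR-10: 50000 training / 10000 test color images of size 32 x 32
(c_tr, _), (c_te, _) = tf.keras.datasets.cifar10.load_data()
assert c_tr.shape == (50000, 32, 32, 3) and c_te.shape == (10000, 32, 32, 3)
```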
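
The quoted experiment setup can be written down concretely. This minimal sketch builds the K = 3 nearest-neighbor adjacency matrix A with scikit-learn and implements the stated step schedule for the learning rate (10^-4, divided by 10 at epochs 100 and 200); the symmetrization of A and all variable names are assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

# Hyperparameters quoted from the paper's experiment setup.
K, p, lam, gamma = 3, 300, 0.1, 0.5   # KNN size, dictionary size, l1 weight, support-reg weight
M, Mz, Mp = 5, 5, 50                  # iteration counts in Algorithm 1

def knn_adjacency(X, k=K):
    """Binary KNN adjacency A; symmetrization is an assumption (the paper
    does not state how A is made undirected)."""
    A = kneighbors_graph(X, n_neighbors=k, mode="connectivity", include_self=False)
    return A.maximum(A.T)  # undirected graph

def learning_rate(epoch):
    """Initial 1e-4, divided by 10 at epochs 100 and 200, hence final 1e-6."""
    lr = 1e-4
    if epoch >= 100:
        lr /= 10.0
    if epoch >= 200:
        lr /= 10.0
    return lr
```

For example, learning_rate(250) returns 1e-6, matching the stated final rate upon termination of training.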