Supervised Transfer Sparse Coding

Authors: Maruan Al-Shedivat, Jim Jing-Yan Wang, Majed Alzahrani, Jianhua Huang, Xin Gao

AAAI 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on three applications demonstrate that a little manual labeling, followed by learning the model in a supervised fashion, can significantly improve classification accuracy.
Researcher Affiliation | Academia | (1) Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal, Jeddah 23955, Saudi Arabia; (2) University at Buffalo, The State University of New York, Buffalo, NY 14203, United States; (3) Department of Statistics, Texas A&M University, College Station, TX 77843, United States
Pseudocode | Yes | Algorithm 1 STSC: Supervised Transfer Sparse Coding
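The report only notes that Algorithm 1 exists; the pseudocode itself is not reproduced here. As an illustrative sketch (not the paper's exact Algorithm 1), objectives in the transfer sparse coding family typically combine a reconstruction term, an l1 sparsity penalty weighted by λ, and a graph-Laplacian regularizer weighted by α. The function below evaluates such an objective under those assumed terms:

```python
import numpy as np

def tsc_objective(X, D, S, L, lam=0.1, alpha=1e4):
    """Illustrative transfer-sparse-coding-style objective (an assumption,
    not the paper's exact formulation):
        ||X - D S||_F^2  +  lam * ||S||_1  +  alpha * tr(S L S^T)
    X: d x n data, D: d x k dictionary, S: k x n codes, L: n x n Laplacian.
    """
    recon = np.linalg.norm(X - D @ S, 'fro') ** 2  # reconstruction error
    sparsity = lam * np.abs(S).sum()               # l1 sparsity penalty
    graph = alpha * np.trace(S @ L @ S.T)          # Laplacian smoothness term
    return recon + sparsity + graph
```

In this family of methods the dictionary D and codes S are usually updated by alternating minimization of such an objective; the paper's STSC additionally adds a supervised SVM term, which is omitted from this sketch.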
Open Source Code | No | The paper acknowledges ...
Open Datasets | Yes | USPS (1) comprises 9,298 images of hand-written Arabic digits of size 16×16 pixels. MNIST (2) contains 70,000 images of hand-written Arabic digits. ... MADBase (3) is a less known dataset of 70,000 images of hand-written Hindi digits. ... Amazon is the part of the Office (Gong et al. 2012) dataset that has images downloaded from online merchants. Caltech-256 is a standard database of object images of 256 categories (Griffin, Holub, and Perona 2007). ... (1) http://www-i6.informatik.rwth-aachen.de/~keysers/usps.html (2) http://yann.lecun.com/exdb/mnist (3) http://datacenter.aucegypt.edu/shazeem/
Dataset Splits | No | The paper specifies training and testing sets with sample counts but does not explicitly describe a separate validation split or cross-validation setup for hyperparameter tuning. While tuning is mentioned, the specific split used for validation is not detailed.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as CPU or GPU models, or cloud computing specifications.
Software Dependencies | No | The paper does not specify version numbers for any software dependencies or libraries used in the implementation.
Experiment Setup | Yes | We fixed the number of basis vectors k = 128 and the number of nearest neighbors used for Laplacian graph construction p = 5. ... We found a set of optimal parameters for our supervised classification case: λ = 0.1, α = 10^4, µ = 1; we fixed them also for STSC. Then, we tuned the SVM term weight κ and obtained an optimal value κ = 0.35. The SVM coefficient was set to c = 1. The number of iterations for TSC and STSC was T = 100.
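The setup mentions building a Laplacian graph from p = 5 nearest neighbors but does not spell out the construction. A common reading, sketched below as an assumption rather than the paper's verified procedure, is a symmetrized p-nearest-neighbor adjacency W with the unnormalized Laplacian L = Deg − W:

```python
import numpy as np

def knn_laplacian(X, p=5):
    """Unnormalized graph Laplacian L = Deg - W from a symmetrized
    p-nearest-neighbor graph (one plausible reading of the paper's setup).
    X: n x d matrix of samples, one per row.
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances via the expansion trick.
    sq = (X ** 2).sum(axis=1)
    D2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    np.fill_diagonal(D2, np.inf)        # exclude self-neighbors
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D2[i])[:p]    # indices of the p nearest neighbors
        W[i, nbrs] = 1.0                # unweighted (0/1) edges, an assumption
    W = np.maximum(W, W.T)              # symmetrize the adjacency
    return np.diag(W.sum(axis=1)) - W
```

The resulting L is symmetric positive semidefinite with zero row sums, which is what the tr(S L Sᵀ) smoothness regularizer in graph-regularized sparse coding requires; a heat-kernel-weighted variant would only change the edge weights, not this structure.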