Unsupervised Feature Learning by Deep Sparse Coding
Authors: Yunlong He; Koray Kavukcuoglu; Yun Wang; Arthur Szlam; Yanjun Qi
ICLR 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate the performance of the Deep SC framework for image classification on three data sets: Caltech-101 [7], Caltech-256 [11], and 15-Scene. For each data set, the average per-class recognition accuracy is reported. Each reported number is the average of 10 repeated evaluations with randomly selected training and testing images. |
| Researcher Affiliation | Collaboration | heyunlong@gatech.edu, Georgia Institute of Technology; koray@deepmind.com, Deep Mind Technologies; yunwang@princeton.edu, Princeton University; aszlam@ccny.cuny.edu, The City College of New York; yanjun@virginia.edu, University of Virginia |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper does not provide any statements or links regarding the public release of its source code. |
| Open Datasets | Yes | In this section, we evaluate the performance of the Deep SC framework for image classification on three data sets: Caltech-101 [7], Caltech-256 [11], and 15-Scene. |
| Dataset Splits | Yes | The parameters of DRLIM and the parameter to control sparsity in the sparse coding are selected layer by layer through cross-validation. The number of training images per class is set to 30 for Caltech-101, 60 for Caltech-256, and 100 for 15-Scene. Each reported number is the average of 10 repeated evaluations with randomly selected training and testing images. (A sketch of this split-and-evaluate protocol is given below the table.) |
| Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models, memory amounts) used to run its experiments. |
| Software Dependencies | No | The paper mentions the 'LibSVM toolkit [5]' but does not provide version numbers for it or any other software dependencies. |
| Experiment Setup | Yes | For each image, following [4], we sample 16 x 16 image patches with 4-pixel spacing and use 128-dimensional SIFT features as the basic dense feature descriptors. The parameters of DRLIM and the parameter to control sparsity in the sparse coding are selected layer by layer through cross-validation. On each of the three data sets, we consider three settings where the dimension of the sparse codes K is 1024, 2048, and 4096. (A dense-SIFT sampling sketch is given below the table.) |
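
The split-and-evaluate protocol quoted in the Dataset Splits row can be summarized in a short sketch. This is an illustrative reconstruction, not the authors' implementation: `train_and_predict` is a hypothetical stand-in for the linear SVM the paper trains with the LibSVM toolkit, and the averaging follows the paper's description of reporting mean per-class accuracy over 10 random splits.

```python
# Minimal sketch, assuming precomputed per-image features and integer labels.
import numpy as np

def per_class_accuracy(y_true, y_pred):
    """Mean of per-class recognition accuracies, as reported in the paper."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(y_true)
    return float(np.mean([np.mean(y_pred[y_true == c] == c) for c in classes]))

def evaluate(features, labels, n_train_per_class, train_and_predict,
             n_repeats=10, seed=0):
    """Average per-class accuracy over repeated random per-class splits.

    n_train_per_class: 30 (Caltech-101), 60 (Caltech-256), 100 (15-Scene).
    train_and_predict: hypothetical callable (X_train, y_train, X_test) -> y_pred,
    standing in for the paper's linear SVM (LibSVM).
    """
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_repeats):
        train_idx, test_idx = [], []
        for c in np.unique(labels):
            idx = rng.permutation(np.flatnonzero(labels == c))
            train_idx.extend(idx[:n_train_per_class])
            test_idx.extend(idx[n_train_per_class:])
        y_pred = train_and_predict(features[train_idx], labels[train_idx],
                                   features[test_idx])
        scores.append(per_class_accuracy(labels[test_idx], y_pred))
    return float(np.mean(scores))
```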
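
Similarly, the dense feature extraction described in the Experiment Setup row (16 x 16 patches on a 4-pixel grid, 128-dimensional SIFT descriptors per patch) can be sketched as below. OpenCV's SIFT is used here as an assumed stand-in; the paper does not state which SIFT implementation it used.

```python
# Minimal sketch of dense SIFT sampling on a fixed grid (assumption: OpenCV SIFT).
import cv2
import numpy as np

def dense_sift(gray, patch_size=16, step=4):
    """Return an (N, 128) array of SIFT descriptors on a dense grid."""
    sift = cv2.SIFT_create()
    h, w = gray.shape
    half = patch_size // 2
    # One keypoint per grid position, sized so each descriptor covers a 16x16 patch.
    keypoints = [cv2.KeyPoint(float(x), float(y), float(patch_size))
                 for y in range(half, h - half, step)
                 for x in range(half, w - half, step)]
    _, descriptors = sift.compute(gray, keypoints)
    return descriptors

# Usage (hypothetical image path):
# gray = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)
# descs = dense_sift(gray)   # descs.shape == (num_patches, 128)
```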