Doubly Sparsifying Network

Authors: Zhangyang Wang, Shuai Huang, Jiayu Zhou, Thomas S. Huang

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We compare DSN against a few carefully designed baselines, and verify its consistently superior performance in a wide range of settings. DSN is evaluated for two mainstream tasks: electroencephalographic (EEG) signal classification and blood oxygenation level dependent (BOLD) response prediction, and achieves promising results in both cases. In the simulation experiments, we use the first 60,000 samples of the MNIST dataset for training and the last 10,000 for testing. Table 1 clearly shows the performance advantage of DSN over the competitive CNN-SAE method.
Researcher Affiliation | Academia | Zhangyang Wang, Shuai Huang, Jiayu Zhou, Thomas S. Huang. Department of Computer Science and Engineering, Texas A&M University; Department of Industrial and Systems Engineering, University of Washington; Department of Computer Science and Engineering, Michigan State University; Beckman Institute, University of Illinois at Urbana-Champaign. atlaswang@tamu.edu, shuaih@uw.edu, jiayuz@msu.edu, t-huang1@illinois.edu
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks; it describes algorithms such as ISTA and projected gradient descent (PGD) using mathematical equations and textual descriptions.
Open Source Code | No | The paper does not provide concrete access to source code (no specific repository link, explicit code-release statement, or code in supplementary materials) for the methodology it describes.
Open Datasets | Yes | In the simulation experiments, we use the first 60,000 samples of the MNIST dataset for training and the last 10,000 for testing. The proposed DSN is implemented with the CUDA-ConvNet package [Krizhevsky et al., 2012]. We follow [Tabar and Halici, 2016] to adopt the benchmark dataset 2b from BCI Competition IV (training set) [Schlögl, 2003]. We refer to Kendrick Kay et al.'s publicly available datasets of BOLD responses in visual cortex, measured by functional magnetic resonance imaging (fMRI) in human subjects. We use their stimulus set 2, stimulus set 3, (response) dataset 4, and (response) dataset 5. Details about the datasets can be found at [Kay et al., 2013].
Dataset Splits | Yes | We manually decrease the learning rate when the validation error of the network stops decreasing. In the simulation experiments, we use the first 60,000 samples of the MNIST dataset for training and the last 10,000 for testing. A small subset of size ts is drawn from XΣ (the MNIST dataset with t = 60,000 samples), where each class is sampled proportionally. In each session, we randomly select 90% of 400 trials for training and the remaining 10% for testing, and report the mean accuracy of 10 runs.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. It mentions the CUDA-ConvNet package, but not the specific hardware.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers). It mentions the "CUDA-ConvNet package [Krizhevsky et al., 2012]" but without a specific version number.
Experiment Setup | Yes | We use a constant learning rate of 0.01, with the momentum parameter fixed at 0.9, and a batch size of 128. We manually decrease the learning rate when the validation error of the network stops decreasing. The default configuration parameters are s = 14n, m = 1,024, t = 60,000, and k = 2. We use a 1-layer CNN with 10 channels of 1-D convolutional filters of size 20. Other configuration hyperparameters are chosen as k = 3, m = 256, and s = 64. Due to the very simple nature of the visual stimuli used, we vectorize and project each image into a 128-d vector for input, and choose k = 2, m = 256, and s = 64.
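The report notes that the paper presents ISTA only as equations, without pseudocode. For readers unfamiliar with the algorithm, a generic iterative shrinkage-thresholding sketch for the standard sparse-coding problem min_z 0.5‖x − Dz‖² + λ‖z‖₁ looks like the following. This is not the paper's DSN layer; all names and parameter values are illustrative.

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=50):
    """Generic ISTA for sparse coding: min_z 0.5||x - Dz||^2 + lam*||z||_1.

    D: (d, m) dictionary, x: (d,) signal. A textbook sketch, not the
    paper's implementation.
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ z - x)              # gradient of 0.5||x - Dz||^2
        u = z - g / L                      # gradient step
        z = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)  # soft-threshold
    return z
```

Unfolding a fixed number of such iterations into network layers is the general idea behind learned sparse-coding networks of this kind.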
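The experiment-setup row reports SGD with a constant learning rate of 0.01, momentum 0.9, batch size 128, and a manual learning-rate decrease when validation error stops improving. A minimal sketch of those two pieces follows; the decay factor and patience are assumptions, since the paper adjusts the rate manually rather than by rule.

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity, lr=0.01, momentum=0.9):
    """One SGD-with-momentum update, matching the reported lr/momentum."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

def maybe_decay_lr(lr, val_errors, patience=3, factor=0.1):
    """Decrease lr when validation error has not improved for `patience`
    epochs. The paper does this manually; patience/factor are assumed."""
    if len(val_errors) > patience and \
            min(val_errors[-patience:]) >= min(val_errors[:-patience]):
        return lr * factor
    return lr
```

In a training loop, `maybe_decay_lr` would be called once per epoch on the running history of validation errors, mimicking the manual schedule described in the paper.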