SC2Net: Sparse LSTMs for Sparse Coding

Authors: Joey Tianyi Zhou, Kai Di, Jiawei Du, Xi Peng, Hao Yang, Sinno Jialin Pan, Ivor Tsang, Yong Liu, Zheng Qin, Rick Siow Mong Goh

AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show the effectiveness of our method on both unsupervised and supervised tasks.
Researcher Affiliation | Collaboration | Institute of High Performance Computing, A*STAR, Singapore; College of Computer Science, Sichuan University, China; Amazon, Seattle, USA; Nanyang Technological University, Singapore; University of Technology Sydney, Australia
Pseudocode | No | The paper describes the mathematical formulations and update rules for ISTA and SLSTM (Equations 5-8), but it does not present any explicit pseudocode or algorithm blocks. (A hedged sketch of the classical ISTA iteration appears after this table.)
Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository for the proposed method.
Open Datasets | Yes | We evaluate the performance of these methods using MNIST and CIFAR-10 (Krizhevsky 2009). MNIST contains 60,000 training images and 10,000 testing images sampled from the digits (0-9), where each image is of the size of 28 × 28. The CIFAR-10 dataset is another widely used benchmark, which contains 60,000 32 × 32 × 3 color images distributed over 10 subjects.
Dataset Splits | Yes | MNIST contains 60,000 training images and 10,000 testing images sampled from the digits (0-9)... The CIFAR-10 dataset is another widely used benchmark, which contains 60,000 32 × 32 × 3 color images distributed over 10 subjects.
Hardware Specification | Yes | ...to train all neural-network-based approaches with a GPU of NVIDIA TITAN X using Keras.
Software Dependencies | No | The paper mentions "using Keras" for training, but does not provide specific version numbers for Keras or any other software libraries or dependencies.
Experiment Setup | Yes | Thus, we fix the sparsity parameter λ = 0.1 for all evaluated methods in the unsupervised learning setting.
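
For reference, the sketch below shows the classical ISTA iteration that the Pseudocode row refers to, written in Python with NumPy. It is only an illustration of the standard soft-thresholding update, not the paper's SLSTM/SC2Net implementation; the dictionary `D`, observation `y`, step size `1/L`, and iteration count are illustrative assumptions, while `lam = 0.1` mirrors the sparsity parameter reported in the Experiment Setup row.

```python
import numpy as np

def soft_threshold(z, theta):
    """Element-wise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

def ista(y, D, lam=0.1, n_iter=100):
    """Classical ISTA for min_x 0.5*||y - D x||^2 + lam*||x||_1.

    y: observation vector; D: (possibly overcomplete) dictionary matrix.
    The step size 1/L uses the Lipschitz constant of the smooth term's gradient.
    """
    L = np.linalg.norm(D, 2) ** 2           # squared largest singular value of D
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)             # gradient of 0.5*||y - D x||^2
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

With the paper's reported setting one would call `ista(y, D, lam=0.1)`. As the title suggests, SC2Net's idea is to learn this kind of sparse-coding iteration with sparse LSTM units rather than running a fixed update with a hand-set dictionary.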