Sparse Dictionary Learning by Dynamical Neural Networks

Authors: Tsung-Han Lin, Ping Tak Peter Tang

ICLR 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Using spiking neurons to construct our dynamical network, we present a learning process, its rigorous mathematical analysis, and numerical results on several dictionary learning problems."
Researcher Affiliation | Industry | "Tsung-Han Lin, Ping Tak Peter Tang, Intel Corporation, Santa Clara, CA, {tsung-han.lin,peter.tang}@intel.com"
Pseudocode | Yes | "Algorithm 1 Dictionary Learning" (a generic stand-in for this loop is sketched below)
Open Source Code | No | The paper does not provide any statement or link regarding the availability of its source code.
Open Datasets | Yes | "Dataset A. 100K randomly sampled 8×8 patches from the grayscale Lena image to learn 256 atoms. Dataset B. 50K 28×28 MNIST images (LeCun et al., 1998) to learn 512 atoms. Dataset C. 200K randomly sampled 16×16 patches from whitened natural scenes (Olshausen & Field, 1996) to learn 1024 atoms." (See the patch-sampling sketch below.)
Dataset Splits | No | The paper mentions using datasets and a test set, but does not provide specific train/validation/test splits or methodologies for creating them.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models) used for running its experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | "For each input, the network is run with γ = 0 from t = 0 to t = 20 and with γ = 0.7 from t = 20 to t = 40, both with a discrete time step of 1/32." (See the simulation sketch below.)
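
For the Open Datasets row above, a minimal sketch of the random patch-sampling step, assuming images are already loaded as 2-D NumPy arrays; the function name, its arguments, and the placeholder `lena` variable are illustrative, not taken from the paper:

```python
import numpy as np

def sample_patches(image, patch_size, num_patches, rng=None):
    """Draw num_patches random patch_size x patch_size patches from a 2-D image.

    Illustrative helper, not from the paper: each patch is flattened into a
    row of the returned (num_patches, patch_size**2) array.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape
    patches = np.empty((num_patches, patch_size * patch_size))
    for i in range(num_patches):
        r = rng.integers(0, h - patch_size + 1)  # top-left row of the patch
        c = rng.integers(0, w - patch_size + 1)  # top-left column of the patch
        patches[i] = image[r:r + patch_size, c:c + patch_size].ravel()
    return patches

# Dataset A, for example: 100K random 8x8 patches from the grayscale Lena
# image. `lena` is a placeholder 2-D float array the user must supply.
# X = sample_patches(lena, patch_size=8, num_patches=100_000)
```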
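The Experiment Setup row pins down a schedule (γ = 0 on [0, 20), γ = 0.7 on [20, 40)) and a step size of 1/32, but not the network equations. The sketch below Euler-integrates LCA-style sparse-coding dynamics under that schedule; treating γ as a scale on the recurrent inhibition, and the soft-threshold nonlinearity with parameter `lam`, are assumptions for illustration rather than the paper's spiking-network model.

```python
import numpy as np

def run_network(D, x, t_end=40.0, dt=1.0 / 32.0, lam=0.1):
    """Euler-integrate soft-threshold (LCA-style) dynamics for one input x.

    Only the gamma schedule and the step size dt = 1/32 come from the quoted
    setup; the dynamics themselves are a conventional stand-in (assumption).
    """
    n_atoms = D.shape[1]
    u = np.zeros(n_atoms)            # internal (membrane-like) state
    a = np.zeros(n_atoms)            # sparse code output
    G = D.T @ D - np.eye(n_atoms)    # lateral inhibition weights
    b = D.T @ x                      # feedforward drive
    for step in range(int(round(t_end / dt))):
        t = step * dt
        gamma = 0.0 if t < 20.0 else 0.7   # schedule from the quoted setup
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft threshold
        u += dt * (-u + b - gamma * (G @ a))
    return a
```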
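For the Pseudocode row (Algorithm 1, Dictionary Learning), a conventional alternating loop is sketched below as a generic stand-in: sparse coding via `run_network` from the previous sketch, then a gradient step on the reconstruction error with atoms renormalized to unit norm. The paper's Algorithm 1 realizes these steps with a spiking network and local learning rules, so treat this as a reference point, not the paper's method.

```python
import numpy as np

def learn_dictionary(X, n_atoms, n_epochs=10, lr=0.01, seed=0):
    """Generic alternating dictionary learning (illustrative stand-in).

    X holds one training vector per row; `run_network` is the sparse-coding
    sketch above. Epoch count, learning rate, and seed are arbitrary choices.
    """
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[1], n_atoms))
    D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
    for _ in range(n_epochs):
        for x in X:
            a = run_network(D, x)                # sparse code for this input
            D += lr * np.outer(x - D @ a, a)     # reduce reconstruction error
            D /= np.linalg.norm(D, axis=0)       # renormalize atoms
    return D
```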