Convolutional dictionary learning based auto-encoders for natural exponential-family distributions
Authors: Bahareh Tolooshams, Andrew Song, Simona Temereanca, Demba Ba
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We 1) demonstrate that fitting the generative model to learn, in an unsupervised fashion, the latent stimulus that underlies neural spiking data leads to better goodness-of-fit compared to other baselines, 2) show competitive performance compared to state-of-the-art algorithms for supervised Poisson image denoising, with significantly fewer parameters, and 3) characterize the gradient dynamics of the shallow binomial auto-encoder. [...] We apply our framework in three different settings. Poisson image denoising (supervised): We evaluate the performance of supervised DCEA in Poisson image denoising and compare it to state-of-the-art algorithms. ECDL for simulation (unsupervised): We use simulations to examine how the unsupervised DCEA performs ECDL for binomial data. With access to ground-truth data, we evaluate the accuracy of the learned dictionary. Additionally, we conduct ablation studies in which we relax the constraints on DCEA and assess how accuracy changes. ECDL for neural spiking data (unsupervised): Using neural spiking data collected from mice (Temereanca et al., 2008), we perform unsupervised ECDL using DCEA. As is common in the analysis of neural data (Truccolo et al., 2005), we assume a binomial generative model. |
| Researcher Affiliation | Academia | (1) School of Engineering and Applied Sciences, Harvard University, Cambridge, MA; (2) Massachusetts Institute of Technology, Cambridge, MA; (3) Brown University, Boston, MA. Correspondence to: Bahareh Tolooshams <btolooshams@seas.harvard.edu>. |
| Pseudocode | Yes | This decoder completes the forward pass of DCEA, also summarized in Algorithm 2 of the Appendix. (A hedged sketch of this forward pass appears below the table.) |
| Open Source Code | Yes | The code can be found at https://github.com/btolooshams/dea |
| Open Datasets | Yes | We used the PASCAL VOC image set (Everingham et al., 2012) containing J = 5,700 training images. [...] We now apply DCEA to neural spiking data from somatosensory thalamus of rats recorded in response to periodic whisker deflections (Temereanca et al., 2008). |
| Dataset Splits | Yes | DCEA is trained in a supervised manner on the PASCAL VOC image set (Everingham et al., 2012) containing J = 5,700 training images. [...] We used two test datasets: 1) Set12 (12 images) and 2) BSD68 (68 images) (Martin et al., 2001). [...] Given µj, we simulated two sets of binary time-series, each with Mj = 25, yj ∈ {0, 1, . . . , 25}^500, one of which is used for training and the other for validation. [...] We used 30 trials from each neuron to learn h1 and the remaining 20 trials as a test set to assess goodness-of-fit. (A sketch of the binomial simulation appears below the table.) |
| Hardware Specification | No | The paper mentions running DCEA on a GPU for runtime comparisons but does not specify the GPU model or any other hardware details (CPU, memory, etc.). |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies, libraries, or programming languages used. |
| Experiment Setup | Yes | We unfolded the encoder for T = 15 iterations. We initialized the filters using draws from a standard Gaussian distribution scaled by √(1/L)... We used the ADAM optimizer with an initial learning rate of 10^-3, which we decrease by a factor of 0.8 every 25 epochs, and trained the network for 400 epochs. At every iteration, we crop a random 128 × 128 patch... We set λ = 0.119 for DCEA and set the sparsity level of BCOMP to 16. (A sketch of this training setup appears below the table.) |
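The Pseudocode row above quotes the paper's pointer to Algorithm 2, the DCEA forward pass (an unfolded encoder followed by a convolutional decoder). As a rough illustration, here is a minimal PyTorch sketch of such a forward pass; the class name, hyperparameter defaults, and the identity-link (Gaussian) residual are assumptions for readability, not the authors' released implementation (see https://github.com/btolooshams/dea for the real code).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShallowDCEA(nn.Module):
    """Minimal sketch of a DCEA-style forward pass (hypothetical names/defaults).

    Encoder: T unfolded ISTA iterations for convolutional sparse coding.
    Decoder: linear reconstruction with the same convolutional dictionary.
    For binomial/Poisson data the residual would pass through the inverse
    link (e.g. y - sigmoid(Hx)); the identity link is used here.
    """

    def __init__(self, num_filters=64, kernel_size=7, T=15, lam=0.119, step=0.1):
        super().__init__()
        # Filters drawn from a standard Gaussian and scaled by sqrt(1/L),
        # with L (an upper bound on the Lipschitz constant) taken as 1/step.
        L = 1.0 / step
        self.H = nn.Parameter(
            torch.randn(num_filters, 1, kernel_size, kernel_size) * (1.0 / L) ** 0.5
        )
        self.T, self.lam, self.step = T, lam, step

    def forward(self, y):
        b, _, h, w = y.shape
        k = self.H.shape[-1]
        # Sparse code maps, initialized at zero.
        x = y.new_zeros(b, self.H.shape[0], h - k + 1, w - k + 1)
        for _ in range(self.T):
            # Gradient step on the data-fit term ...
            resid = y - F.conv_transpose2d(x, self.H)
            x = x + self.step * F.conv2d(resid, self.H)
            # ... followed by soft-thresholding (the proximal step).
            x = torch.sign(x) * torch.relu(x.abs() - self.step * self.lam)
        # Decoder: reconstruct with the same (shared) dictionary.
        return F.conv_transpose2d(x, self.H)
```

Note the weight-tying design: the same dictionary `self.H` drives both the encoder's gradient steps and the decoder's reconstruction, which is what makes the network a constrained auto-encoder rather than a generic CNN.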
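The Dataset Splits row quotes the binomial simulation: Mj = 25 trials per bin over 500 bins, with one summed series for training and an independent one for validation. A small NumPy sketch of that sampling scheme, using a made-up natural parameter µj in place of the paper's convolution of ground-truth filters with sparse codes:

```python
import numpy as np

rng = np.random.default_rng(0)

num_bins, M_j = 500, 25
# Stand-in natural parameter mu_j; the paper constructs it by convolving
# ground-truth filters with sparse codes.
mu_j = rng.normal(-2.0, 0.5, size=num_bins)
p_j = 1.0 / (1.0 + np.exp(-mu_j))  # binomial inverse link (sigmoid)

# Summing M_j = 25 Bernoulli draws per bin gives y_j in {0, ..., 25}^500;
# one independent draw serves as training data, the other as validation.
y_train = rng.binomial(M_j, p_j)
y_valid = rng.binomial(M_j, p_j)
```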
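Finally, the Experiment Setup row pins down the optimization recipe: ADAM at 10^-3, decayed by a factor of 0.8 every 25 epochs, 400 epochs, and a fresh random 128 × 128 crop at every iteration. One way to wire those numbers together in PyTorch, with stand-in images and a Gaussian reconstruction loss where the paper would minimize the exponential-family negative log-likelihood:

```python
import torch
import torch.nn.functional as F

model = ShallowDCEA()  # from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Decay the learning rate by a factor of 0.8 every 25 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=25, gamma=0.8)

def random_crop(img, size=128):
    """Random 128 x 128 patch, drawn fresh at every iteration."""
    i = torch.randint(0, img.shape[-2] - size + 1, (1,)).item()
    j = torch.randint(0, img.shape[-1] - size + 1, (1,)).item()
    return img[..., i:i + size, j:j + size]

# Stand-in for the J = 5,700 PASCAL VOC training images.
images = [torch.rand(1, 1, 256, 256) for _ in range(4)]

for epoch in range(400):
    for img in images:
        y = random_crop(img)
        optimizer.zero_grad()
        # Stand-in reconstruction loss; the paper uses the negative
        # log-likelihood of the chosen exponential-family noise model.
        loss = F.mse_loss(model(y), y)
        loss.backward()
        optimizer.step()
    scheduler.step()  # lr <- 0.8 * lr at each 25-epoch boundary
```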