Neuronal Synchrony in Complex-Valued Deep Networks
Authors: David P. Reichert; Thomas Serre
ICLR 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using this approach, in Section 3 we underpin our argument with several simple experiments, focusing on binding by synchrony. |
| Researcher Affiliation | Academia | David P. Reichert DAVID_REICHERT@BROWN.EDU Thomas Serre THOMAS_SERRE@BROWN.EDU Department of Cognitive, Linguistic & Psychological Sciences, Brown University |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions using the Pylearn2 framework and provides links to supplementary videos, but does not state that the code for their specific implementation is open-source or provide a repository link. |
| Open Datasets | Yes | We trained a DBM with one hidden layer (a restricted Boltzmann machine, Smolensky, 1986) on a version of the classic bars problem (Földiák, 1990)... |
| Dataset Splits | No | The paper mentions 'All datasets had 60,000 training images, and were divided into mini-batches of size 100' but does not specify explicit training/validation/test splits, nor does it refer to predefined splits or cross-validation. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory specifications) used to run the experiments. |
| Software Dependencies | No | The paper mentions the 'Pylearn2 framework of Goodfellow et al. (2013b)' but does not specify a version number for Pylearn2 or other key software components used in the experiments. |
| Experiment Setup | Yes | Layers were trained with 60 epochs of 1-step contrastive divergence (Hinton, 2002; learning rate 0.1, momentum 0.5, weight decay 10⁻⁴...), with the exception of the model trained on MNIST+shape, where 5-step persistent contrastive divergence (Tieleman, 2008) was used instead (learning rate 0.005, with exponential decay factor of 1 + 1.5·10⁻⁵). All datasets had 60,000 training images, and were divided into mini-batches of size 100. Biases were initialized to -4 to encourage sparse representations... Initial weights were drawn randomly from a uniform distribution with support [−0.05, 0.05]. The number of hidden layers, number of hidden units, and sizes of the receptive fields were varied from experiment to experiment to demonstrate various properties of neuronal synchronization in the networks (after conversion to complex values). |
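
The hyperparameters quoted in the Experiment Setup row are concrete enough to sketch the pre-conversion training stage. Below is a minimal NumPy sketch, not the authors' Pylearn2 implementation: it generates a bars-problem-style dataset and trains a single RBM with 1-step contrastive divergence using the quoted learning rate (0.1), momentum (0.5), weight decay (10⁻⁴), mini-batch size (100), bias initialization (-4), and uniform weight initialization on [−0.05, 0.05]. The image size, bar probability, and number of hidden units are illustrative assumptions, and the subsequent conversion to complex-valued units (the paper's actual contribution) is not shown.

```python
# Minimal sketch (not the authors' code): CD-1 training of an RBM on a
# bars-problem-style dataset, using the hyperparameters quoted above.
# Image size, bar probability, and n_hid are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def make_bars(n_images, size=8, p_bar=0.125):
    """Binary images in which each horizontal and vertical bar is drawn
    independently with probability p_bar and superimposed."""
    rows = rng.random((n_images, size, 1)) < p_bar   # whole-row (horizontal) bars
    cols = rng.random((n_images, 1, size)) < p_bar   # whole-column (vertical) bars
    return (rows | cols).astype(float).reshape(n_images, size * size)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_vis, n_hid = 8 * 8, 32                       # layer sizes were varied per experiment
W = rng.uniform(-0.05, 0.05, (n_vis, n_hid))   # "uniform distribution with support [-0.05, 0.05]"
b_vis = np.full(n_vis, -4.0)                   # "biases were initialized to -4" (assumed here
b_hid = np.full(n_hid, -4.0)                   #  to apply to both bias vectors)
vel_W = np.zeros_like(W)                       # momentum buffer for the weight updates

lr, momentum, weight_decay = 0.1, 0.5, 1e-4
batch_size, n_epochs = 100, 60
data = make_bars(60_000)                       # "all datasets had 60,000 training images"

for epoch in range(n_epochs):
    rng.shuffle(data)                          # reshuffle images between epochs
    for start in range(0, len(data), batch_size):
        v0 = data[start:start + batch_size]
        # Positive phase: hidden probabilities and a sample given the data.
        ph0 = sigmoid(v0 @ W + b_hid)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one Gibbs step (this is what makes it CD-1).
        pv1 = sigmoid(h0 @ W.T + b_vis)
        v1 = (rng.random(pv1.shape) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W + b_hid)
        # Gradient estimate, with momentum and L2 weight decay on the weights.
        grad_W = (v0.T @ ph0 - v1.T @ ph1) / batch_size
        vel_W = momentum * vel_W + lr * (grad_W - weight_decay * W)
        W += vel_W
        b_vis += lr * (v0 - v1).mean(axis=0)
        b_hid += lr * (ph0 - ph1).mean(axis=0)
```

Shrinking the dataset or the epoch count makes the sketch run in seconds; the 60,000 images and 60 epochs are kept only to mirror the reported setup, and the MNIST+shape variant (5-step persistent contrastive divergence) is not reproduced here.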