Critical Learning Periods in Deep Networks
Authors: Alessandro Achille, Matteo Rovere, Stefano Soatto
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this paper, however, we show that deep neural networks (DNNs), while completely devoid of such regulations, respond to sensory deficits in ways similar to those observed in humans and animal models. This surprising result suggests that critical periods may arise from information processing, rather than biochemical, phenomena. ...Our findings, described in Section 2 ("Experiments"), indicate that the early transient is critical... |
| Researcher Affiliation | Academia | Alessandro Achille, Department of Computer Science, University of California, Los Angeles (achille@cs.ucla.edu); Matteo Rovere, Ann Romney Center for Neurologic Diseases, Brigham and Women's Hospital and Harvard Medical School (mrovere@bwh.harvard.edu); Stefano Soatto, Department of Computer Science, University of California, Los Angeles (soatto@cs.ucla.edu) |
| Pseudocode | No | The paper describes network architectures (e.g., "conv 96 → conv 96 → conv 192 (s2) → conv 192 → conv 192 → conv 192 (s2) → conv 192 → conv1 192 → conv1 10 → avg. pooling → softmax") and mathematical expressions for Fisher Information, but it does not contain structured pseudocode or algorithm blocks. (A hedged sketch of this architecture appears after the table.) |
| Open Source Code | No | The paper does not provide an explicit statement about the release of open-source code for the described methodology, nor does it include direct links to a code repository. |
| Open Datasets | Yes | To do so, we train a standard All-CNN architecture based on Springenberg et al. (2014) (see Appendix A) to classify objects in small 32×32 images from the CIFAR-10 dataset (Krizhevsky & Hinton, 2009). ...a fully-connected network trained on the MNIST digit classification dataset also shows a critical period for the image blur deficit. (A data-loading sketch follows the table.) |
| Dataset Splits | No | The paper describes training on CIFAR-10 and MNIST and explicitly mentions a 'test set' for evaluation. However, it does not describe a separate 'validation' split (e.g., percentages, sample counts, or how the split was constructed) for hyperparameter tuning or early stopping. |
| Hardware Specification | No | The paper discusses the training of neural networks on datasets like CIFAR-10 and MNIST, but it does not provide specific details about the hardware used for these experiments (e.g., GPU models, CPU types, or cloud resources). |
| Software Dependencies | No | The paper mentions software components and methods like "All-CNN architecture", "SGD", "ResNet-18 architecture", and "Adam" optimizer, but it does not specify exact version numbers for programming languages, deep learning frameworks, or other ancillary software dependencies required to replicate the experiments. |
| Experiment Setup | Yes | In all of the experiments, unless otherwise stated, we use the following All-CNN architecture, adapted from Springenberg et al. (2014): ... The network is trained with SGD, with a batch size of 128, learning rate starting from 0.05 and decaying smoothly by a factor of 0.97 at each epoch. We also use weight decay with coefficient 0.001. In the experiments with a fixed learning rate, we fix the learning rate to 0.001... For the ResNet experiments, we use the ResNet-18 architecture from He et al. (2016) with initial learning rate 0.1, learning rate decay 0.97 per epoch, and weight decay 0.0005. When training with Adam, we use a learning rate of 0.001 and weight decay 0.0001. (A training-loop sketch wiring these settings together appears below the table.) |
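
For concreteness, the All-CNN layout quoted under "Pseudocode" can be read as a PyTorch module. The sketch below is an illustration, not the authors' code: the channel counts and strides come from the quoted layout, while the kernel sizes (3×3 for `conv`, 1×1 for `conv1`) and ReLU activations are assumptions carried over from the Springenberg et al. (2014) All-CNN design.

```python
import torch.nn as nn

class AllCNN(nn.Module):
    """Sketch of the quoted All-CNN layout. Channel counts and strides
    are from the paper; kernel sizes and ReLUs are assumptions."""

    def __init__(self, num_classes: int = 10):
        super().__init__()

        def conv(c_in, c_out, stride=1, k=3):
            # 'conv1' in the quoted layout is read here as a 1x1 convolution.
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, k, stride=stride, padding=k // 2),
                nn.ReLU(inplace=True),
            )

        self.features = nn.Sequential(
            conv(3, 96),                  # conv 96
            conv(96, 96),                 # conv 96
            conv(96, 192, stride=2),      # conv 192 s2
            conv(192, 192),               # conv 192
            conv(192, 192),               # conv 192
            conv(192, 192, stride=2),     # conv 192 s2
            conv(192, 192),               # conv 192
            conv(192, 192, k=1),          # conv1 192
            conv(192, num_classes, k=1),  # conv1 10
            nn.AdaptiveAvgPool2d(1),      # avg. pooling
        )

    def forward(self, x):
        # Returns logits; the final softmax is folded into the
        # cross-entropy loss during training.
        return self.features(x).flatten(1)
```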
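Likewise, both datasets named under "Open Datasets" are standard downloads. A hypothetical torchvision loader for CIFAR-10 might look as follows; the normalization statistics are conventional values assumed for illustration, since the paper's preprocessing is not quoted above.

```python
import torchvision
import torchvision.transforms as T
from torch.utils.data import DataLoader

# CIFAR-10: 60,000 32x32 colour images in 10 classes
# (Krizhevsky & Hinton, 2009). MNIST loads analogously via
# torchvision.datasets.MNIST.
transform = T.Compose([
    T.ToTensor(),
    # Assumed per-channel statistics; not stated in the paper.
    T.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])
train_set = torchvision.datasets.CIFAR10(
    "data", train=True, download=True, transform=transform)
test_set = torchvision.datasets.CIFAR10(
    "data", train=False, download=True, transform=transform)

# Batch size 128, as quoted under "Experiment Setup".
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
test_loader = DataLoader(test_set, batch_size=128, shuffle=False)
```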
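Finally, the hyperparameters quoted under "Experiment Setup" map directly onto standard PyTorch components. The loop below wires them together as a sketch; the paper states only the hyperparameters, so the loop structure and epoch count are assumptions, and `AllCNN` and `train_loader` refer to the sketches above.

```python
import torch
import torch.nn as nn

model = AllCNN()
criterion = nn.CrossEntropyLoss()

# SGD with initial learning rate 0.05 and weight decay 0.001, as quoted.
optimizer = torch.optim.SGD(model.parameters(), lr=0.05, weight_decay=0.001)
# "Decaying smoothly by a factor of 0.97 at each epoch": step once per epoch.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.97)

num_epochs = 160  # illustrative; the epoch count is not in the quoted setup
for epoch in range(num_epochs):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()  # per-epoch learning-rate decay

# Variants quoted in the paper:
#   ResNet-18: SGD with lr 0.1, per-epoch decay 0.97, weight decay 0.0005.
#   Adam:      torch.optim.Adam(model.parameters(), lr=0.001,
#                               weight_decay=0.0001)
```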