Feature Learning and Signal Propagation in Deep Neural Networks

Authors: Yizhang Lou, Chris E Mingard, Soufiane Hayou

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments demonstrate an excellent match with the theoretical predictions."
Researcher Affiliation | Academia | St John's College, University of Oxford, UK; PTCL, University of Oxford, UK; Department of Physics, University of Oxford, UK; Department of Mathematics, National University of Singapore
Pseudocode | Yes | Algorithm 1: "Layer-wise maximisation of features"
Open Source Code | No | The paper neither states that its source code is released nor links to a code repository for the described methodology.
Open Datasets | Yes | "Example Alignment Hierarchies for 20 layer FFNNs trained on Fashion MNIST (L) and CIFAR10 (R)" ... "an FFNN with depth 10 and width 256" ... "trained on the MNIST/Fashion MNIST/CIFAR10 datasets"
Dataset Splits | Yes | "training and validation set split of 45000/5000" (see the first sketch after this table)
Hardware Specification | No | The paper does not describe the hardware (e.g., GPU or CPU models) used for its experiments.
Software Dependencies | No | The paper names optimisers such as SGD but gives no version numbers for the software libraries it depends on.
Experiment Setup | Yes | "optimised with SGD with weight decay, momentum, and learning rates of 0.003" ... "See Fig. 8 for clear demonstration of the change in C with learning rate" ... Tables 1, 2, and 3 detail the depth, width, learning rate, and epochs used. (See the second sketch below.)
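
The 45000/5000 split quoted in the Dataset Splits row is concrete enough to reproduce directly. Below is a minimal sketch of such a split on CIFAR-10; the paper does not state which framework or random seed it used, so PyTorch/torchvision and the seed value here are assumptions for illustration only.

```python
# Minimal sketch of the reported 45000/5000 train/validation split.
# Framework (PyTorch) and seed are assumptions; the paper states only
# the split sizes and the datasets (MNIST/Fashion MNIST/CIFAR10).
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

full_train = datasets.CIFAR10(
    root="./data", train=True, download=True,
    transform=transforms.ToTensor(),
)

# CIFAR-10's training set has 50000 images; hold out 5000 for validation.
generator = torch.Generator().manual_seed(0)  # seed choice is hypothetical
train_set, val_set = random_split(full_train, [45000, 5000], generator=generator)

print(len(train_set), len(val_set))  # 45000 5000
```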
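The Experiment Setup row pins down the optimiser family and learning rate but not every hyperparameter. The sketch below wires a depth-10, width-256 fully-connected network (the configuration quoted in the Open Datasets row) to SGD with a learning rate of 0.003. The momentum (0.9) and weight-decay (5e-4) values are placeholder assumptions, since the table confirms only that weight decay and momentum were used, and reading "depth 10" as ten hidden layers is likewise an interpretation.

```python
# Sketch of the reported optimisation setup: SGD with weight decay,
# momentum, and learning rate 0.003, on a depth-10 / width-256 FFNN.
# Momentum 0.9 and weight decay 5e-4 are assumed values, not quoted ones.
import torch
import torch.nn as nn

def make_ffnn(depth=10, width=256, in_dim=3 * 32 * 32, out_dim=10):
    """Fully-connected ReLU network with `depth` hidden layers."""
    layers = [nn.Flatten(), nn.Linear(in_dim, width), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, out_dim))
    return nn.Sequential(*layers)

model = make_ffnn()
optimizer = torch.optim.SGD(
    model.parameters(), lr=0.003, momentum=0.9, weight_decay=5e-4
)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on random tensors standing in for CIFAR-10.
x, y = torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```

The remaining hyperparameters (epochs, and per-experiment depth/width/learning-rate variations) are listed in the paper's Tables 1, 2, and 3, so a full reproduction would substitute those values into the sketch above.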