Embedding Principle of Loss Landscape of Deep Neural Networks

Authors: Yaoyu Zhang, Zhongwang Zhang, Tao Luo, Zhiqin J Xu

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirically, we find that a wide DNN is often attracted by highly-degenerate critical points that are embedded from narrow DNNs. Overall, our work provides a skeleton for the study of loss landscape of DNNs and its implication, by which a more exact and comprehensive understanding can be anticipated in the near future." "Numerical experiments"
Researcher Affiliation | Academia | (1) School of Mathematical Sciences, Institute of Natural Sciences, MOE-LSC and Qing Yuan Research Institute, Shanghai Jiao Tong University; (2) Shanghai Center for Brain Science and Brain-Inspired Technology
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement or link to open-source code for the described methodology.
Open Datasets | Yes | "We train a two-layer NN of width $m_{\mathrm{small}} = 2$ to learn data of Fig. 1 shown in Fig. 3(a) or the Iris dataset (Fisher, 1936) in Fig. 3(b) to a critical point. We train a width-400 two-layer ReLU NN $f_\theta = \sum_{k=1}^{m} a_k \sigma(w_k^T x)$ ($x = [x, 1]$) on 1000 training samples of the MNIST dataset with small initialization."
Dataset Splits | No | The paper refers to
Hardware Specification | No | The paper mentions running experiments on the
Software Dependencies | No | The paper mentions using
Experiment Setup | Yes | "Experimental setup. Throughout this work, we use a two-layer fully-connected neural network with size $d$-$m$-$d_{out}$. The input dimension $d$ is determined by the training data. The output dimension $d_{out}$ is different for different experiments. The number of hidden neurons $m$ is specified in each experiment. All parameters are initialized by a Gaussian distribution with mean zero and variance specified in each experiment. We use MSE loss trained by full-batch gradient descent for 1D fitting problems (Figs. 1, 3(a) and 4), and the default Adam optimizer with full batch for others. The learning rate is fixed throughout the training." A minimal code sketch of this setup appears below.
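
The sketch below combines the network form quoted in the Open Datasets row ($f_\theta = \sum_{k=1}^{m} a_k \sigma(w_k^T x)$ with augmented input $x = [x, 1]$) with the training configuration quoted in the Experiment Setup row. The paper does not name a framework, so PyTorch is assumed here; the class name TwoLayerNN, the helper train_full_batch, and all hyperparameter values (width, initialization variance, learning rate, step count) are illustrative placeholders, not the authors' released code.

# Hedged sketch: two-layer fully-connected network of size d-m-d_out,
# f_theta(x) = sum_k a_k * sigma(w_k^T x) with the bias folded in via x = [x, 1],
# Gaussian initialization with mean zero, MSE loss, full-batch training,
# and a fixed learning rate (plain gradient descent for 1D fitting, Adam otherwise).
import torch
import torch.nn as nn

class TwoLayerNN(nn.Module):
    def __init__(self, d, m, d_out, init_std=0.01):
        super().__init__()
        # No explicit biases: appending 1 to the input absorbs the hidden-layer bias,
        # matching the quoted form f_theta = sum_k a_k * sigma(w_k^T x), x = [x, 1].
        self.hidden = nn.Linear(d + 1, m, bias=False)
        self.output = nn.Linear(m, d_out, bias=False)
        # All parameters drawn from N(0, init_std^2); the variance is set per experiment.
        for p in self.parameters():
            nn.init.normal_(p, mean=0.0, std=init_std)

    def forward(self, x):
        ones = torch.ones(x.shape[0], 1, device=x.device)
        x_aug = torch.cat([x, ones], dim=1)  # x = [x, 1]
        return self.output(torch.relu(self.hidden(x_aug)))

def train_full_batch(model, X, Y, lr=1e-3, n_steps=10000, use_adam=True):
    # Full-batch training with MSE loss and a fixed learning rate.
    # use_adam=False corresponds to the plain gradient-descent runs (1D fitting);
    # the quoted text specifies MSE only for those runs, so MSE elsewhere is an assumption.
    opt = (torch.optim.Adam(model.parameters(), lr=lr) if use_adam
           else torch.optim.SGD(model.parameters(), lr=lr))
    loss_fn = nn.MSELoss()
    for _ in range(n_steps):
        opt.zero_grad()
        loss = loss_fn(model(X), Y)
        loss.backward()
        opt.step()
    return model

# Example with placeholder data standing in for the 1000 MNIST samples:
# X, Y = torch.randn(1000, 784), torch.randn(1000, 10)
# model = TwoLayerNN(d=784, m=400, d_out=10, init_std=0.01)
# train_full_batch(model, X, Y, lr=1e-3, use_adam=True)

Used this way, a width-400 model corresponds to the MNIST experiment in the Open Datasets row, and a small init_std corresponds to the "small initialization" regime described there.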