Piecewise linear activations substantially shape the loss surfaces of neural networks
Authors: Fengxiang He, Bohan Wang, Dacheng Tao
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We first prove that the loss surfaces of many neural networks have infinite spurious local minima which are defined as the local minima with higher empirical risks than the global minima. Our result demonstrates that the networks with piecewise linear activations possess substantial differences to the well-studied linear neural networks. This result holds for any neural network with arbitrary depth and arbitrary piecewise linear activation functions (excluding linear functions) under most loss functions in practice. (For a minimal numeric illustration of a spurious local minimum, see the sketch after this table.) |
| Researcher Affiliation | Academia | Fengxiang He, Bohan Wang & Dacheng Tao, UBTECH Sydney AI Centre, School of Computer Science, Faculty of Engineering, The University of Sydney, Darlington, NSW 2008, Australia. {fengxiang.he, dacheng.tao}@sydney.edu.au, bhwangfy@gmail.com |
| Pseudocode | No | The paper contains mathematical derivations and proof skeletons but no explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link regarding the release of open-source code for the described methodology. |
| Open Datasets | No | The paper is theoretical and does not involve experimental training on a dataset. The term 'training sample set' is used for theoretical definitions, not to describe actual data used in experiments. |
| Dataset Splits | No | The paper is theoretical and does not describe any experiments that would require dataset splits for validation. |
| Hardware Specification | No | The paper is theoretical and reports no experimental work, so no hardware specifications are provided. |
| Software Dependencies | No | The paper is theoretical and reports no experimental work, so no software dependencies or version numbers are listed. |
| Experiment Setup | No | The paper is theoretical and reports no experimental work, so no experimental setup details such as hyperparameters or training settings are provided. |
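
As a companion to the Research Type row above, here is a minimal numeric sketch of the notion quoted there: a spurious local minimum, i.e. a local minimum whose empirical risk is strictly higher than the global minimum's. This is not the paper's construction (the paper proves the result for arbitrary depth and arbitrary non-linear piecewise linear activations); it is the standard "dead ReLU" illustration, and all names (`f`, `loss`, the toy dataset) are our own.

```python
import numpy as np

# One-hidden-neuron ReLU network: f(x) = v * max(w*x + b, 0).
def f(x, w, b, v):
    return v * np.maximum(w * x + b, 0.0)

def loss(w, b, v, x, y):
    return 0.5 * np.mean((f(x, w, b, v) - y) ** 2)

x = np.array([1.0, 2.0])
y = x.copy()  # target: the identity map, which the network can fit exactly

# A global minimum: (w, b, v) = (1, 0, 1) attains zero empirical risk.
print("loss at a global minimum:", loss(1.0, 0.0, 1.0, x, y))  # 0.0

# A "dead" configuration: w*x + b < 0 on every training point, so the
# ReLU outputs 0 everywhere and the network is constant at 0.
w0, b0, v0 = -1.0, -0.5, 2.0
base = loss(w0, b0, v0, x, y)
print("loss at a dead configuration:", base)  # 1.25 > 0

# Small perturbations keep all pre-activations negative, so the loss is
# locally constant: the dead point is a (non-strict) local minimum with
# strictly higher risk than the global minimum, i.e. a spurious one.
rng = np.random.default_rng(0)
for _ in range(5):
    dw, db, dv = 1e-3 * rng.standard_normal(3)
    assert np.isclose(loss(w0 + dw, b0 + db, v0 + dv, x, y), base)
print("loss is flat in a neighborhood of the dead configuration")
```

Every point of the open region where w*x_i + b < 0 for all training points is such a local minimum, so this toy case already exhibits infinitely many spurious local minima, a concrete (if far weaker) instance of the phenomenon the paper establishes in full generality.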