On the Number of Linear Regions of Deep Neural Networks

Authors: Guido F. Montúfar, Razvan Pascanu, Kyunghyun Cho, Yoshua Bengio

NeurIPS 2014

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We empirically examined the behavior of a trained MLP to see if it folds the input space in the way described above. (An illustrative sketch of such a check follows this table.) |
| Researcher Affiliation | Academia | Guido Montúfar, Max Planck Institute for Mathematics in the Sciences (montufar@mis.mpg.de); Razvan Pascanu, Université de Montréal (pascanur@iro.umontreal.ca); Kyunghyun Cho, Université de Montréal (kyunghyun.cho@umontreal.ca); Yoshua Bengio, Université de Montréal and CIFAR Fellow (yoshua.bengio@umontreal.ca) |
| Pseudocode | No | The paper contains no pseudocode or algorithm blocks. |
| Open Source Code | No | The paper provides no statement or link indicating that source code for the described methodology is openly available. |
| Open Datasets | No | The paper describes an 'Empirical Evaluation of Folding in Rectifier MLPs' and refers to a 'training example' and 'inputs identified by a deep MLP', but it names no public dataset and provides no access information for the data used in this evaluation. |
| Dataset Splits | No | The empirical evaluation involves tracing activations and inspecting examples, but the paper gives no details on training, validation, or test splits, nor any cross-validation setup. |
| Hardware Specification | No | The paper gives no details about the hardware used to run the experiments (e.g., GPU or CPU models). |
| Software Dependencies | No | The paper lists no software dependencies or version numbers for the experiments. |
| Experiment Setup | No | The paper describes an empirical evaluation but provides no setup details such as hyperparameters, optimizer settings, or training configuration. |
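
Since the paper releases no code, its 'Empirical Evaluation of Folding in Rectifier MLPs' cannot be rerun as published. As a rough, hypothetical illustration of the kind of check involved, the sketch below estimates the number of linear regions of a small random rectifier MLP by sampling a 2-D input grid and counting distinct ReLU activation patterns. Every detail here (the layer sizes, Gaussian initialization, grid range, and resolution) is an assumption for illustration, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    # One (W, b) pair per layer; Gaussian init is an arbitrary choice here.
    return [(rng.standard_normal((m, n)), rng.standard_normal(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def activation_pattern(params, x):
    # Record which rectifier units are "on" (pre-activation > 0) at input x.
    # Inputs that share a pattern lie in the same linear region of the
    # piecewise-linear function the network computes.
    bits = []
    h = x
    for W, b in params:
        pre = h @ W + b
        bits.append(pre > 0)
        h = np.maximum(pre, 0.0)  # ReLU
    return tuple(np.concatenate(bits))

# Hypothetical architecture: 2-D input, two hidden layers of 8 ReLU units.
params = init_mlp([2, 8, 8])

# Count distinct activation patterns over a dense grid of inputs; this
# lower-bounds the number of linear regions the network carves out.
grid = np.linspace(-2.0, 2.0, 200)
patterns = {activation_pattern(params, np.array([x, y]))
            for x in grid for y in grid}
print("distinct activation patterns found:", len(patterns))
```

Note that grid sampling only lower-bounds the true region count, since any region smaller than the grid spacing is missed; the paper's theoretical contribution is precisely to bound how this count can grow with depth and width.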