Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

On the Number of Linear Regions of Deep Neural Networks

Authors: Guido F. Montúfar, Razvan Pascanu, Kyunghyun Cho, Yoshua Bengio

NeurIPS 2014 | Venue PDF | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We empirically examined the behavior of a trained MLP to see if it folds the input-space in the way described above." |
| Researcher Affiliation | Academia | Guido Montúfar, Max Planck Institute for Mathematics in the Sciences, EMAIL; Razvan Pascanu, Université de Montréal, EMAIL; Kyunghyun Cho, Université de Montréal, EMAIL; Yoshua Bengio, Université de Montréal, CIFAR Fellow, EMAIL |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that source code for the described methodology is openly available. |
| Open Datasets | No | The paper mentions an 'Empirical Evaluation of Folding in Rectifier MLPs' and refers to 'training example' and 'inputs identified by a deep MLP', but it does not specify any named public dataset or provide access information for the data used in this empirical examination. |
| Dataset Splits | No | The paper describes an 'Empirical Evaluation' involving tracing activations and inspecting examples, but it does not provide specific details on training, validation, or test dataset splits or a cross-validation setup. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running experiments (e.g., GPU models, CPU types). |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers used for the experiments. |
| Experiment Setup | No | The paper describes an 'Empirical Evaluation' but does not provide specific experimental setup details such as hyperparameters, optimizer settings, or training configurations. |
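The "Empirical Evaluation" rows above refer to tracing ReLU activations to observe how a rectifier MLP folds input space into linear regions. As a minimal sketch of that idea (not the authors' code; the network sizes, random weights, and the probed line segment are all assumptions), one can count the distinct activation patterns a ReLU MLP produces along a line through input space, since each distinct pattern corresponds to a different linear region:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy architecture: untrained ReLU MLP with layers 2 -> 8 -> 8.
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 8)), rng.normal(size=8)

def activation_pattern(x):
    """Return the on/off pattern of every ReLU unit for input x."""
    h1 = np.maximum(0.0, W1 @ x + b1)
    h2 = np.maximum(0.0, W2 @ h1 + b2)
    return tuple((h1 > 0).astype(int)) + tuple((h2 > 0).astype(int))

# Sweep a line segment through input space; each change of activation
# pattern marks a boundary between two linear regions of the network.
ts = np.linspace(-3.0, 3.0, 2000)
patterns = {activation_pattern(np.array([t, 0.5 * t])) for t in ts}
print(f"linear regions crossed along the segment: {len(patterns)}")
```

This only lower-bounds the region count (it sees regions intersected by one segment), but it illustrates the kind of activation tracing the report says the paper performed without released code.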