Training Neural Networks is ∃R-complete
Authors: Mikkel Abrahamsen, Linda Kleist, Tillmann Miltzow
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We determine the algorithmic complexity of this fundamental problem precisely, by showing that it is ∃R-complete. In this paper, we show that it is ∃R-complete to decide if there exist weights and biases that result in a cost below a given threshold. |
| Researcher Affiliation | Academia | Mikkel Abrahamsen, University of Copenhagen (miab@di.ku.dk); Linda Kleist, Technische Universität Braunschweig (kleist@ibr.cs.tu-bs.de); Tillmann Miltzow, Utrecht University (t.miltzow@uu.nl) |
| Pseudocode | No | The paper describes the reduction and the construction process using narrative text and figures, but it does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that open-source code for the described methodology is available. Under '3. If you ran experiments...', the authors explicitly state '[N/A]' for providing code. |
| Open Datasets | No | The paper is theoretical and focuses on algorithmic complexity, not empirical evaluation. It defines NN-TRAINING with a conceptual 'training data' (D) but does not use or provide access to a specific, publicly available dataset for experiments. Under '3. If you ran experiments...', the authors explicitly state '[N/A]' for data. |
| Dataset Splits | No | The paper does not describe any experiments involving data splits for training, validation, or testing, as it is a theoretical work. Under '3. If you ran experiments...', the authors explicitly state '[N/A]' for specifying all training details. |
| Hardware Specification | No | The paper is theoretical and does not report on empirical experiments that would require specific hardware. Under '3. If you ran experiments...', the authors explicitly state '[N/A]' for including the total amount of compute and the type of resources used. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers as it is a theoretical work and does not report on empirical experiments requiring specific software environments. |
| Experiment Setup | No | The paper is purely theoretical and does not describe any empirical experiments, thus it does not include details about an experimental setup, hyperparameters, or system-level training settings. Under '3. If you ran experiments...', the authors explicitly state '[N/A]' for specifying all training details. |
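To make the decision problem from the first row concrete, here is a minimal sketch (the toy XOR instance, architecture, and all names are our own illustration, not from the paper): *verifying* one candidate weight assignment against a cost threshold is straightforward, whereas the paper proves that deciding whether such an assignment *exists* is ∃R-complete.

```python
# Toy illustration of the NN-TRAINING decision problem: given an
# architecture, training data D, and a threshold c, do weights and
# biases exist with total cost <= c? Below we only evaluate one
# candidate (the easy verification step), not the existential search.

def relu(x):
    return max(0.0, x)

def forward(x, w1, b1, w2, b2):
    """Two-input, two-hidden-unit, one-output ReLU network (toy architecture)."""
    h = [relu(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    return w2[0] * h[0] + w2[1] * h[1] + b2

def total_cost(data, w1, b1, w2, b2):
    """Sum of squared errors over the training set D."""
    return sum((forward(x, w1, b1, w2, b2) - y) ** 2 for x, y in data)

# Hypothetical training data D with XOR-like targets.
D = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
     ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]

# One candidate assignment; this network computes XOR exactly,
# so its cost is 0 and it certifies a "yes" instance for any c >= 0.
w1 = [[1.0, 1.0], [1.0, 1.0]]
b1 = [0.0, -1.0]
w2 = [1.0, -2.0]
b2 = 0.0

threshold = 1e-9
print(total_cost(D, w1, b1, w2, b2) <= threshold)  # True
```

The asymmetry this sketch highlights is exactly why the problem lands in ∃R rather than NP: a witness may require irrational weights, so the brute-force "guess and verify" template of NP does not directly apply.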