Training Neural Networks is NP-Hard in Fixed Dimension

Authors: Vincent Froese, Christoph Hertrich

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We study the parameterized complexity of training two-layer neural networks... Our results settle the complexity status regarding these parameters almost completely. In this paper, we answer Question 1 negatively by showing NP-hardness for d = 2 in Theorem 1... We prove Theorem 6 with a parameterized reduction from the 2-HYPERPLANE SEPARABILITY problem. (See the illustrative sketch below the table.)
Researcher Affiliation | Academia | Vincent Froese, Algorithmics and Computational Complexity, Faculty IV, TU Berlin, Berlin, Germany, vincent.froese@tu-berlin.de; Christoph Hertrich, Department of Mathematics, London School of Economics and Political Science, London, UK, c.hertrich@lse.ac.uk
Pseudocode | No | The paper describes algorithms and reductions in prose but does not include any formally structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any statement about providing open-source code for the described methodology, nor does it include any links to code repositories.
Open Datasets | No | The paper is theoretical and does not conduct experiments on specific datasets, so it provides no information about publicly available training data.
Dataset Splits | No | The paper is theoretical and does not describe empirical experiments, so it does not discuss training/validation/test dataset splits.
Hardware Specification | No | The paper is theoretical and does not report empirical experiments; therefore, it does not specify any hardware.
Software Dependencies | No | The paper is theoretical and does not describe software implementation details; therefore, it does not list specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and focuses on computational complexity; it does not describe the setup of empirical experiments or hyperparameter details.
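Since the Research Type row above quotes the paper's study of training two-layer (ReLU) neural networks in fixed dimension, the following is a minimal, illustrative Python sketch of the decision problem whose hardness is at issue: given labeled points in R^d, a hidden width k, and an error budget gamma, decide whether some choice of weights achieves total loss at most gamma. The squared-error loss, the function names (two_layer_relu, fits_within_error), and the toy data are assumptions made here for illustration, not taken from the paper; the sketch only verifies a candidate solution and does not attempt to solve the (NP-hard) search problem.

import numpy as np

def two_layer_relu(x, W, b, a, c):
    """Evaluate a two-layer ReLU network f(x) = sum_j a_j * max(0, <w_j, x> + b_j) + c."""
    return a @ np.maximum(0.0, W @ x + b) + c

def fits_within_error(X, y, W, b, a, c, gamma):
    """Check whether candidate weights achieve total squared error <= gamma on the
    data points (X[i], y[i]) -- i.e., verify a 'yes' certificate of the decision problem."""
    preds = np.array([two_layer_relu(x, W, b, a, c) for x in X])
    return float(np.sum((preds - y) ** 2)) <= gamma

# Toy usage: d = 2 input dimensions, k = 2 hidden ReLU units (hypothetical example data).
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
y = np.array([0.0, 1.0, 1.0])
W = np.array([[1.0, 0.0], [0.0, 1.0]])   # hidden-layer weights, one row per ReLU unit
b = np.zeros(2)                           # hidden-layer biases
a = np.ones(2)                            # output weights
c = 0.0                                   # output bias
print(fits_within_error(X, y, W, b, a, c, gamma=1e-9))  # True: this candidate fits the toy data exactly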