Efficient Deep Approximation of GMMs

Authors: Shirin Jalali, Carl Nuzman, Iraj Saniee

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | The main result of this paper is that the discriminant functions described in (1), required for computing the optimal classification function C(x), can be approximated accurately by a relatively small neural network with two hidden layers, but accurate approximation with a single-hidden-layer network is possible only if either the number of nodes or the magnitudes of the coefficients are exponentially large in n. Before stating our main results, in this section we establish a connection between the accuracy in approximating the discriminant functions of a classifier and the error performance of a classifier that employs these approximations.
Researcher Affiliation | Industry | Shirin Jalali, Carl Nuzman, Iraj Saniee; Bell Labs, Nokia; 600-700 Mountain Avenue, Murray Hill, NJ 07974; {shirin.jalali,carl.nuzman,iraj.saniee}@nokia-bell-labs.com
Pseudocode | No | The paper focuses on theoretical proofs and mathematical derivations and does not include any pseudocode or algorithm blocks.
Open Source Code | No | The paper is theoretical and does not mention providing open-source code for the described methodology.
Open Datasets | No | The paper is purely theoretical, focusing on mathematical proofs and analysis, and does not involve the use of datasets for training.
Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with datasets, so no dataset split information (training, validation, test) is provided.
Hardware Specification | No | The paper is theoretical and does not describe any experimental setup or the hardware used for experiments.
Software Dependencies | No | The paper is theoretical and does not mention specific software dependencies with version numbers, as there are no empirical experiments or code implementations discussed.
Experiment Setup | No | The paper is theoretical and does not include details about an experimental setup, hyperparameters, or training settings.
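The Research Type row refers to discriminant functions for classifying points drawn from Gaussian mixture models. The paper's own construction (its equation (1)) is not reproduced here; as a hedged illustration of the general idea, the sketch below implements a generic Bayes classifier for classes modeled by isotropic GMMs, where each class discriminant is the log prior plus the log class-conditional mixture density, and C(x) picks the argmax. All function names, and the isotropic-covariance simplification, are our assumptions, not the paper's notation.

```python
import numpy as np

def log_gaussian(x, mu, sigma2):
    # Log density of an isotropic Gaussian N(mu, sigma2 * I) in n dimensions.
    n = x.shape[-1]
    return -0.5 * (n * np.log(2 * np.pi * sigma2)
                   + np.sum((x - mu) ** 2, axis=-1) / sigma2)

def discriminant(x, weights, means, sigma2, prior):
    # log p(class) + log sum_k w_k N(x; mu_k, sigma2 * I),
    # computed with a log-sum-exp shift for numerical stability.
    comps = np.log(weights) + np.array([log_gaussian(x, m, sigma2) for m in means])
    m = comps.max()
    return np.log(prior) + m + np.log(np.sum(np.exp(comps - m)))

def classify(x, gmms):
    # C(x): argmax over the per-class discriminant functions.
    # gmms is a list of (weights, means, sigma2, prior) tuples, one per class.
    return int(np.argmax([discriminant(x, *g) for g in gmms]))
```

The paper's theoretical question is how cheaply a neural network can approximate such discriminants; this sketch only shows the exact computation those networks would approximate.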