On Separability of Loss Functions, and Revisiting Discriminative Vs Generative Models

Authors: Adarsh Prasad, Alexandru Niculescu-Mizil, Pradeep K. Ravikumar

NeurIPS 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We instantiate our results with two running examples of isotropic and non-isotropic Gaussian generative models, and also corroborate our theory with instructive simulations." and "6 Experiments: High Dimensional Classification"
Researcher Affiliation | Collaboration | Adarsh Prasad, Machine Learning Dept., CMU (adarshp@andrew.cmu.edu); Alexandru Niculescu-Mizil, NEC Laboratories America, Princeton, NJ, USA (alex@nec-labs.com); Pradeep Ravikumar, Machine Learning Dept., CMU (pradeepr@cs.cmu.edu)
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide any information about open-source code availability for the described methodology.
Open Datasets | No | "For our experimental setup, we consider isotropic Gaussian models with means µ0 = 1_p − (1/√s)·1_S, µ1 = 1_p + (1/√s)·1_S, and vary the sparsity level s." The data are simulated rather than drawn from a public dataset (see the data-generation sketch after the table).
Dataset Splits | No | The paper describes generating synthetic data for its simulations and averaging results over 20 trials, rather than using explicit training/validation/test splits of a fixed dataset.
Hardware Specification | No | The paper does not describe the hardware used to run its experiments.
Software Dependencies | No | The paper does not list any software dependencies with version numbers.
Experiment Setup | Yes | "For both methods, we set the regularization parameter as λn = √(log(p)/n)." and "we introduce a thresholded generative estimator that has two stages: (a) compute ŵ_diff using the generative model estimates, and (b) soft-threshold the generative estimate with λn = c/√n for some constant c." (See the estimator sketch after the table.)
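
The simulation setup quoted in the Open Datasets row is compact enough to sketch in code. Below is a minimal NumPy sketch of one plausible reading of that setup; taking the support S to be the first s coordinates, using identity covariance, drawing balanced labels, and the function name make_gaussian_data are all our assumptions, not details confirmed by the paper.

```python
import numpy as np

def make_gaussian_data(n, p, s, rng=None):
    """Sample a binary classification dataset from two isotropic Gaussians
    whose means differ only on an s-sparse support.

    Assumed reading of the quoted setup: S = first s coordinates, and
    mu_y = 1_p -/+ (1/sqrt(s)) * 1_S for y = 0/1, with identity covariance.
    """
    rng = np.random.default_rng(rng)
    bump = np.zeros(p)
    bump[:s] = 1.0 / np.sqrt(s)                   # (1/sqrt(s)) * 1_S
    mu0, mu1 = np.ones(p) - bump, np.ones(p) + bump
    y = rng.integers(0, 2, size=n)                # balanced labels on average
    X = rng.standard_normal((n, p)) + np.where(y[:, None] == 1, mu1, mu0)
    return X, y

# Example: n = 200 samples in p = 1000 dimensions with sparsity level s = 10.
X, y = make_gaussian_data(n=200, p=1000, s=10, rng=0)
```

Under this reading, varying s changes the sparsity of the mean difference µ1 − µ0 = (2/√s)·1_S, which is the axis along which the paper compares the discriminative and generative estimators.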
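
Similarly, the two-stage thresholded generative estimator quoted in the Experiment Setup row can be sketched as follows. This assumes, as our reading rather than the paper's exact construction, that ŵ_diff is the plug-in difference of class-conditional sample means (which, for isotropic Gaussians, is proportional to the optimal linear discriminant direction) and that stage (b) applies coordinate-wise soft-thresholding at level λn = c/√n.

```python
import numpy as np

def soft_threshold(v, lam):
    """Coordinate-wise soft-thresholding: sign(v) * max(|v| - lam, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def thresholded_generative_estimator(X, y, c=1.0):
    """Sketch of the quoted two-stage estimator.

    Stage (a): plug-in generative estimate of the discriminant direction,
    taken here as the difference of class-conditional sample means.
    Stage (b): soft-threshold at lam_n = c / sqrt(n) (assumed form).
    """
    n = len(y)
    w_diff = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)  # stage (a)
    lam_n = c / np.sqrt(n)                                    # assumed lambda_n
    return soft_threshold(w_diff, lam_n)                      # stage (b)

# Usage with data from the previous sketch:
# w_hat = thresholded_generative_estimator(X, y, c=0.5)
```

For the discriminative baseline, the quoted choice λn = √(log(p)/n) is the usual ℓ1-penalty scaling for high-dimensional logistic regression; the soft-thresholding step above plays the analogous sparsity-inducing role for the generative estimate.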