Finite-Sample Maximum Likelihood Estimation of Location

Authors: Shivam Gupta, Jasper Lee, Eric Price, Paul Valiant

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we give experimental evidence supporting our proposed algorithmic theory. Our goals are to demonstrate that 1) r-smoothing is a beneficial pre-processing to the MLE, and that the r-smoothed Fisher information does capture the algorithmic performance in location estimation, and 2) the r-smoothed MLE can outperform the standard MLE, as well as standard mean estimation algorithms which do not leverage information about the distribution shape. (A sketch of this r-smoothing pipeline follows the table.)
Researcher Affiliation | Academia | Shivam Gupta, The University of Texas at Austin (shivamgupta@utexas.edu); Jasper C.H. Lee, University of Wisconsin-Madison (jasper.lee@wisc.edu); Eric Price, The University of Texas at Austin (ecprice@cs.utexas.edu); Paul Valiant, Purdue University (pvaliant@gmail.com)
Pseudocode | Yes | Algorithm 1: Local MLE for known parametric model; Algorithm 2: Global MLE for known parametric model
Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for its methodology.
Open Datasets | No | The paper uses a 'Gaussian-spiked Laplace model for experiments', which appears to be a synthetic model generated by the authors, not a publicly available dataset with concrete access information.
Dataset Splits | No | The paper does not specify training, validation, or test dataset splits. It mentions using 'n samples' but gives no details on partitioning them into such subsets.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not list any specific software dependencies or their version numbers (e.g., programming languages, libraries, frameworks).
Experiment Setup | Yes | We use the Gaussian-spiked Laplace model for experiments, with a Laplace distribution of density proportional to e^{-|x|}, and a Gaussian of mass 0.001 and width roughly 0.002 (the discretization granularity) added at x = 4. The x-axis varies the number of samples n from 50 to 5000, and the y-axis varies the smoothing parameter r from 0.001 to 1 on a log scale. (This setup is sketched in code below.)
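
Since the Experiment Setup row fully specifies the synthetic model, a minimal Python sketch of it may be helpful. This is not code from the paper: the spike width of exactly 0.002 (the paper says "roughly"), the renormalization convention for the added spike, and the number of grid points are assumptions, while the closed-form Gaussian-smoothed Laplace density is a standard convolution identity.

```python
import numpy as np
from scipy.stats import norm

# Spike parameters from the setup description; the exact width is an
# assumption (the paper says "roughly 0.002", the discretization granularity).
SPIKE_LOC, SPIKE_MASS, SPIKE_W = 4.0, 0.001, 0.002

def spiked_laplace_density(x):
    """Gaussian-spiked Laplace density: Laplace(0, 1), with density
    proportional to e^{-|x|}, plus a Gaussian spike of mass 0.001 at x = 4.
    Renormalizing the total mass by 1/(1 + 0.001) is an assumption."""
    laplace = 0.5 * np.exp(-np.abs(x))
    spike = SPIKE_MASS * norm.pdf(x, loc=SPIKE_LOC, scale=SPIKE_W)
    return (laplace + spike) / (1.0 + SPIKE_MASS)

def smoothed_log_density(x, r):
    """Log-density of the model convolved with N(0, r^2), i.e. the r-smoothed
    density, combining the standard Laplace*Gaussian closed form with the
    exact Gaussian*Gaussian convolution for the spike."""
    # Laplace(0,1) * N(0,r^2) = 0.5 e^{r^2/2} (e^{-x} Phi((x - r^2)/r)
    #                                          + e^{x} Phi(-(x + r^2)/r)),
    # evaluated in log space for numerical stability.
    log_left = -x + norm.logcdf((x - r**2) / r)
    log_right = x + norm.logcdf(-(x + r**2) / r)
    log_laplace_r = np.log(0.5) + r**2 / 2 + np.logaddexp(log_left, log_right)
    # The spike N(4, w^2) convolved with N(0, r^2) is N(4, w^2 + r^2).
    log_spike_r = np.log(SPIKE_MASS) + norm.logpdf(
        x, loc=SPIKE_LOC, scale=np.hypot(SPIKE_W, r))
    return np.logaddexp(log_laplace_r, log_spike_r) - np.log1p(SPIKE_MASS)

# Experiment axes from the setup row; the point counts are assumptions.
ns = np.unique(np.round(np.geomspace(50, 5000, 15)).astype(int))
rs = np.geomspace(0.001, 1.0, 15)
```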
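
Continuing that sketch (it reuses `SPIKE_LOC`, `SPIKE_MASS`, `SPIKE_W`, and `smoothed_log_density` from above), the following shows the r-smoothing pipeline referenced in the Research Type row: perturb each sample with N(0, r^2) noise, then maximize the r-smoothed log-likelihood. The grid search is a deliberately simple stand-in for the paper's Algorithms 1 and 2, whose exact steps are not reproduced in this summary, and all numbers below are illustrative.

```python
rng = np.random.default_rng(0)

def sample_spiked_laplace(n, theta):
    """Draw n samples from the spiked Laplace model shifted to location theta,
    using the same mixture-weight assumption as the density above."""
    x = rng.laplace(0.0, 1.0, size=n)
    from_spike = rng.random(n) < SPIKE_MASS / (1.0 + SPIKE_MASS)
    x[from_spike] = rng.normal(SPIKE_LOC, SPIKE_W, size=from_spike.sum())
    return theta + x

def r_smoothed_mle(samples, r, grid):
    """r-smoothed MLE sketch: add N(0, r^2) noise to the samples, then return
    the candidate location maximizing the smoothed log-likelihood (grid search
    standing in for the paper's local/global MLE algorithms)."""
    noisy = samples + rng.normal(0.0, r, size=len(samples))
    scores = [np.sum(smoothed_log_density(noisy - mu, r)) for mu in grid]
    return grid[int(np.argmax(scores))]

theta_true = 0.3                        # hypothetical ground-truth location
x = sample_spiked_laplace(1000, theta_true)
grid = np.arange(-1.0, 2.0, 0.002)      # 0.002 matches the stated granularity
for r in (0.001, 0.05, 1.0):
    print(f"r = {r}: estimate = {r_smoothed_mle(x, r, grid):.4f}")
print(f"sample mean = {x.mean():.4f}, sample median = {np.median(x):.4f}")
```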