Estimating Mixture Models via Mixtures of Polynomials

Authors: Sida Wang, Arun Tejasvi Chaganty, Percy S. Liang

NeurIPS 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Simulations show good empirical performance on several models. In Table 3, we show the relative error $\max_k \|\theta_k - \hat{\theta}_k\|_2$ averaged over 10 random models of each class. (A sketch of this metric follows the table.)
Researcher Affiliation | Academia | Sida I. Wang, Arun Tejasvi Chaganty, Percy Liang. Computer Science Department, Stanford University, Stanford, CA 94305. {sidaw,chaganty,pliang}@cs.stanford.edu
Pseudocode | No | The paper's Figure 1 gives a step-by-step overview of the framework, but there is no formal pseudocode block or listing explicitly labeled 'Algorithm' or 'Pseudocode'.
Open Source Code | Yes | We implemented Polymom for several mixture models in Python (code: https://github.com/sidaw/polymom).
Open Datasets | No | The paper refers to T as the number of samples and describes synthetic model classes (e.g., spherical Gaussians, linear regressions), but it provides no link, DOI, or formal citation for any publicly available dataset used for training.
Dataset Splits | No | The paper states that T is the number of samples but gives no split percentages, sample counts, or citations to predefined splits that would be needed to reproduce the training, validation, or test partitioning.
Hardware Specification | No | The paper notes that the implementation uses CVXOPT and Python, but it gives no hardware details (CPU/GPU models, memory amounts, or cloud instance types) for its experiments.
Software Dependencies | No | The paper mentions Python and CVXOPT but does not give version numbers for these dependencies, which would be needed for full reproducibility.
Experiment Setup | Yes | The paper specifies experimental setup details for baselines, such as 'EM: sklearn initialized with k-means using 5 random restarts'. (A sketch of this baseline follows the table.)
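
For concreteness, here is a minimal sketch of the error metric quoted under Research Type, assuming the paper's $\theta_k$ are per-component parameter vectors. The permutation-based alignment of estimated to true components is our assumption; the paper reports the metric but not how component labels are matched.

```python
import numpy as np
from itertools import permutations

def max_param_error(theta_true, theta_hat):
    """max_k ||theta_k - theta_hat_k||_2, minimized over permutations of the
    estimated components. The label alignment via brute-force permutation
    search is an assumption, not taken from the paper."""
    K = len(theta_true)
    best = np.inf
    for perm in permutations(range(K)):
        err = max(np.linalg.norm(theta_true[k] - theta_hat[perm[k]])
                  for k in range(K))
        best = min(best, err)
    return best

# The paper's Table 3 averages this quantity over 10 random models per class:
# errors = [max_param_error(t, th) for (t, th) in runs]; np.mean(errors)
```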
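Similarly, a minimal sketch of the EM baseline described under Experiment Setup (k-means initialization, 5 random restarts). GaussianMixture is the current scikit-learn API; the paper's 2015 experiments would have used the older sklearn.mixture.GMM class, so this is an approximation, and the synthetic data here is purely illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Illustrative synthetic mixture data (not from the paper).
rng = np.random.default_rng(0)
K, d, T = 3, 2, 900                    # components, dimension, sample count
centers = rng.normal(scale=5.0, size=(K, d))
X = np.vstack([rng.normal(centers[k], 1.0, size=(T // K, d))
               for k in range(K)])

# EM baseline as described: k-means initialization with 5 random restarts;
# scikit-learn keeps the restart with the best log-likelihood.
em = GaussianMixture(n_components=K, init_params="kmeans", n_init=5)
em.fit(X)
print(em.means_)                       # estimated component means
```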