The Sample Complexity of Semi-Supervised Learning with Nonparametric Mixture Models
Authors: Chen Dan, Liu Leqi, Bryon Aragam, Pradeep K. Ravikumar, Eric P. Xing
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Finally, we describe three algorithms for computing these estimators based on a connection to bipartite graph matching, and perform experiments to illustrate the superiority of the MLE over the majority vote estimator." and "In order to evaluate the relative performance of the proposed estimators in practice, we implemented each of the three methods described in Section 5 on simulated and real data." (See the matching sketch below the table.) |
| Researcher Affiliation | Collaboration | ¹Carnegie Mellon University, ²Petuum Inc. {cdan,leqil,naragam,pradeepr,epxing}@cs.cmu.edu |
| Pseudocode | No | The paper describes algorithms in Section 5 in prose, but does not include structured pseudocode blocks or clearly labeled algorithm figures. |
| Open Source Code | No | The paper does not provide any concrete access information (e.g., specific repository link, explicit code release statement, or mention of code in supplementary materials) for the methodology described. |
| Open Datasets | Yes | MNIST: "To approximate real data, we used training data from the MNIST dataset to build K = 10 class conditionals f_k from real data using kernel density estimates." (See the KDE sketch below the table.) |
| Dataset Splits | No | The paper mentions using "training data" and "labeled samples" but does not provide specific percentages, sample counts, or explicit methodology for training/validation/test splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | "Full details of the experiments can be found in Appendix A." and "In each experiment, a random true mixture model Λ was generated from one of these settings, and then N = 99 labeled samples were drawn from this mixture model... This procedure was repeated T = 50 times (holding Λ and Λ fixed)." (See the protocol sketch below the table.) |
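
The Research Type row quotes the paper's connection between label assignment and bipartite graph matching. The following is a minimal sketch of that general idea, not the authors' estimators: a few labeled samples are used to map K mixture components to class labels, either by per-component majority vote or by solving a maximum-weight bipartite matching over permutations. The array `component_loglik`, the Gaussian-noise toy data, and the helper names are assumptions for illustration.

```python
# Sketch only: compare a majority-vote component-to-class assignment with one
# obtained by maximum-weight bipartite matching (Hungarian algorithm).
import numpy as np
from scipy.optimize import linear_sum_assignment

def majority_vote_assignment(hard_component, labels, K):
    """Give each component the most frequent label among its labeled samples."""
    assign = np.zeros(K, dtype=int)
    for k in range(K):
        votes = labels[hard_component == k]
        assign[k] = np.bincount(votes, minlength=K).argmax() if votes.size else k
    return assign

def matching_assignment(component_loglik, labels, K):
    """Choose the component-to-class permutation maximizing total log-likelihood,
    solved as a maximum-weight bipartite matching."""
    # weight[k, c] = total log-likelihood under component k of samples labeled c
    weight = np.zeros((K, K))
    for c in range(K):
        weight[:, c] = component_loglik[labels == c].sum(axis=0)
    row, col = linear_sum_assignment(-weight)  # minimize negative = maximize
    assign = np.zeros(K, dtype=int)
    assign[row] = col
    return assign

# Toy usage with synthetic per-sample log-likelihoods for N = 99 labeled samples.
rng = np.random.default_rng(0)
K, N = 10, 99
labels = rng.integers(0, K, size=N)
component_loglik = rng.normal(size=(N, K)) + 3.0 * np.eye(K)[labels]
hard_component = component_loglik.argmax(axis=1)
print(majority_vote_assignment(hard_component, labels, K))
print(matching_assignment(component_loglik, labels, K))
```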
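The Open Datasets row quotes the paper's use of per-class kernel density estimates to build K = 10 class conditionals f_k from MNIST. A minimal sketch of that step is below, using scikit-learn's bundled digits data as a stand-in for MNIST; the bandwidth, kernel, and PCA dimension are illustrative assumptions, not the paper's settings.

```python
# Sketch only: fit one kernel density estimate per class to obtain K = 10
# class conditionals f_k.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.neighbors import KernelDensity

X, y = load_digits(return_X_y=True)            # swap in MNIST for the real experiment
X = PCA(n_components=20, random_state=0).fit_transform(X)

K = 10
class_conditionals = []
for k in range(K):
    kde = KernelDensity(kernel="gaussian", bandwidth=2.0).fit(X[y == k])
    class_conditionals.append(kde)             # f_k, queried via kde.score_samples

# Example: evaluate log f_k(x) for the first point under every class.
log_fk = np.array([kde.score_samples(X[:1])[0] for kde in class_conditionals])
print(log_fk.argmax(), y[0])
```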
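The Experiment Setup row quotes the protocol of drawing N = 99 labeled samples from a random true mixture Λ and repeating the estimation T = 50 times. The sketch below illustrates that protocol under an assumed Gaussian mixture; the mixture form, dimensions, and the placeholder estimator call are not taken from the paper.

```python
# Sketch only: draw a random mixture Lambda, sample N = 99 labeled points,
# and repeat the estimation T = 50 times with the mixture held fixed.
import numpy as np

rng = np.random.default_rng(0)
K, N, T = 10, 99, 50

# A random "true" mixture Lambda: mixing weights plus component means.
weights = rng.dirichlet(np.ones(K))
means = rng.normal(scale=5.0, size=(K, 2))

def sample_labeled(n):
    z = rng.choice(K, size=n, p=weights)       # component labels
    x = means[z] + rng.normal(size=(n, 2))     # observations
    return x, z

for _ in range(T):
    X_labeled, y_labeled = sample_labeled(N)
    # ... run the estimators (e.g., majority vote vs. MLE) on (X_labeled, y_labeled)
```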