High-Dimensional Variance-Reduced Stochastic Gradient Expectation-Maximization Algorithm
Authors: Rongda Zhu, Lingxiao Wang, Chengxiang Zhai, Quanquan Gu
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "demonstrate the advantages of our algorithm by both theoretical analysis and numerical experiments" and "In this section, we present experiment results to validate our theory." |
| Researcher Affiliation | Collaboration | (1) Facebook, Inc., Menlo Park, CA 94025; (2) Department of Computer Science, University of Virginia, Charlottesville, VA 22904, USA; (3) Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801. |
| Pseudocode | Yes | Algorithm 1 Variance Reduced Stochastic Gradient EM Algorithm (VRSGEM) |
| Open Source Code | No | The paper does not contain any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The paper describes generating synthetic data for experiments based on specified parameters (e.g., 'The covariance matrix Σ of V is chosen to be a diagonal matrix with all elements being 1. We randomly set two elements to λmax(Σ) = 10, and another two elements to λmin(Σ) = 0.1.'), but does not provide access information (link, DOI, or citation) to a publicly available dataset. |
| Dataset Splits | No | The paper describes overall experiment settings including sample sizes (N), but does not explicitly provide specific train/validation/test dataset splits (percentages or counts) or refer to standard predefined splits. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any ancillary software dependencies (e.g., programming languages, libraries, or frameworks) used in the experiments. |
| Experiment Setup | Yes | "All the comparisons are under two different parameter settings: s = 5, d = 256, b = 100, N = 5000 and s = 10, d = 512, b = 200, N = 10000. For VRSGEM, we choose m = 30, n = 50 and T = 50 across all settings and models. ... The learning rate η is tuned by grid search and s is chosen by cross validation. We use random initialization." |
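
The tuning procedure quoted in the Experiment Setup row (learning rate η chosen by grid search, sparsity s by cross-validation) could be sketched as follows. This is a minimal illustration, not the authors' code: `run_vrsgem` is a hypothetical placeholder for one run of the algorithm returning a validation loss, and the candidate grids are invented for the example.

```python
import itertools

def run_vrsgem(eta, s, fold):
    """Hypothetical stand-in for one VRSGEM run; returns a toy validation
    loss that is smallest at eta = 0.1, s = 5 (placeholder values only)."""
    return (eta - 0.1) ** 2 + (s - 5) ** 2 + 0.01 * fold

def tune(etas, sparsities, n_folds=5):
    """Grid-search eta and cross-validate s: pick the (eta, s) pair with
    the lowest mean validation loss across folds."""
    best_pair, best_loss = None, float("inf")
    for eta, s in itertools.product(etas, sparsities):
        mean_loss = sum(run_vrsgem(eta, s, k) for k in range(n_folds)) / n_folds
        if mean_loss < best_loss:
            best_pair, best_loss = (eta, s), mean_loss
    return best_pair

print(tune([0.01, 0.1, 1.0], [2, 5, 10]))  # -> (0.1, 5)
```

With the toy loss above, the search correctly recovers the placeholder optimum; in the paper's actual setting the inner call would be a full VRSGEM run on held-out folds.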