Refined Convergence Rates for Maximum Likelihood Estimation under Finite Mixture Models
Authors: Tudor Manole, Nhat Ho
ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our theoretical findings are supported by a simulation study to illustrate these improved convergence rates. |
| Researcher Affiliation | Academia | 1Department of Statistics and Data Science, Carnegie Mellon University 2Department of Statistics and Data Sciences, University of Texas, Austin. |
| Pseudocode | Yes | Algorithm 1 Modified EM Algorithm. |
| Open Source Code | Yes | All code for reproducing our simulation study is publicly available: https://github.com/tmanole/Refined-Mixture-Rates |
| Open Datasets | No | The paper uses synthetic data generated for the simulation study rather than publicly available datasets. For each model, we generate 20 samples of size n, for 100 different choices of n between 10^2 and 10^5. |
| Dataset Splits | No | The paper describes generating synthetic data for a simulation study and does not specify explicit training, validation, or test dataset splits. |
| Hardware Specification | No | The paper states only that "All simulations hereafter were performed in Python 3.7 on a standard Unix machine," which does not constitute a specific hardware specification. |
| Software Dependencies | Yes | All simulations hereafter were performed in Python 3.7 |
| Experiment Setup | Yes | We chose the convergence criteria ϵ = 10^-8 and T = 2,000. Since our aim is to illustrate theoretical properties of the estimator Ĝ_n, we initialized the EM algorithm favourably. In particular, for any given k and k_0, and for each replication, we randomly partitioned the set {1, ..., k} into k_0 index sets I_1, ..., I_{k_0}, each containing at least one point. We then sampled θ_j^(0) (resp. Σ_j^(0)) from a Gaussian distribution with vanishing covariance, centered at θ_{0ℓ} (resp. Σ_{0ℓ}), where ℓ is the unique index such that j ∈ I_ℓ. (A sketch of this initialization appears below the table.) |
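
The favourable initialization described in the Experiment Setup row can be expressed compactly in code. The snippet below is a minimal sketch, not the authors' implementation (their code is at the repository linked above): the function name `favourable_init`, the noise scale `tau`, and the example true parameters are illustrative assumptions, and the EM loop itself is omitted.

```python
import numpy as np

def favourable_init(theta0, Sigma0, k, rng, tau=1e-6):
    """Sketch of the favourable EM initialization: randomly partition the k
    fitted components among the k0 true components so that each true
    component receives at least one index, then initialize each fitted
    parameter as a small Gaussian perturbation of its true counterpart.

    theta0 : (k0, d) array of true component means
    Sigma0 : (k0, d, d) array of true component covariances
    k      : number of fitted components (requires k >= k0)
    tau    : "vanishing" noise scale (an assumed value, not from the paper)
    """
    k0, d = theta0.shape
    # Random partition of {0, ..., k-1} into k0 nonempty index sets: the
    # first k0 slots of a random permutation guarantee nonemptiness, and the
    # remaining k - k0 indices are assigned uniformly at random.
    perm = rng.permutation(k)
    assignment = np.empty(k, dtype=int)
    assignment[perm[:k0]] = np.arange(k0)
    assignment[perm[k0:]] = rng.integers(0, k0, size=k - k0)
    # Means: Gaussian with vanishing covariance tau * I, centered at the
    # true mean of the assigned index set.
    theta_init = np.stack([
        rng.multivariate_normal(theta0[l], tau * np.eye(d)) for l in assignment
    ])
    # Covariances: small symmetric Gaussian perturbation of the true
    # covariance (symmetrized so the result stays a valid covariance matrix
    # for tiny tau).
    Sigma_init = []
    for l in assignment:
        noise = tau * rng.standard_normal((d, d))
        Sigma_init.append(Sigma0[l] + 0.5 * (noise + noise.T))
    return theta_init, np.stack(Sigma_init)

# Hypothetical usage mirroring the reported study design: 100 sample sizes
# between 10^2 and 10^5, with 20 replications at each size.
rng = np.random.default_rng(0)
sample_sizes = np.logspace(2, 5, num=100).astype(int)
theta0 = np.array([[0.0, 0.0], [3.0, 3.0]])   # assumed true means (k0 = 2)
Sigma0 = np.stack([np.eye(2), np.eye(2)])     # assumed true covariances
theta_init, Sigma_init = favourable_init(theta0, Sigma0, k=3, rng=rng)
```

In the study as reported, EM would then be run from these initial values under the stated convergence criteria ϵ = 10^-8 and T = 2,000 iterations.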