Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Theoretical Guarantees for Variational Inference with Fixed-Variance Mixture of Gaussians
Authors: Tom Huix, Anna Korba, Alain Oliviero Durmus, Eric Moulines
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We illustrate the validity of the rate derived in Corollary 5 with simple experiments. (Section 3) and We finally test numerically the validity of Theorem 7 in a simple setting. (Section 4) and Appendix F outlines the setup used for the numerical experiments. |
| Researcher Affiliation | Academia | ¹CMAP, Ecole polytechnique; ²ENSAE, CREST, IP Paris. |
| Pseudocode | No | The paper describes the algorithm using mathematical equations (e.g., Equation 6) and prose, but does not include a clearly labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | No | The paper does not contain any statement about releasing source code or provide links to a code repository. |
| Open Datasets | No | The target distribution µ is chosen to be a Gaussian mixture with 100 components... The components (x_i)_{i=1,...,100} are randomly sampled from a normal distribution N(0, σ²I_d)... The variational family used for the experiments is the family of Gaussian mixtures with 10 components... At the beginning of the training, the mean of each component (x_i)_{i=1,...,10} is randomly initialized, sampled from a normal distribution N(0, ζ²I_d)... (Appendix F). This indicates the data was generated for the experiments and no public access information is provided. |
| Dataset Splits | No | The paper describes the generation of data for experiments, but it does not specify explicit training, validation, and test dataset splits with percentages or counts. |
| Hardware Specification | No | This work was granted access to the HPC resources of IDRIS under the allocation AD011013313R2 made by GENCI (Grand Equipement National de Calcul Intensif). (Acknowledgements). This is a general mention of HPC resources without specific hardware details (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper does not provide specific software dependencies, such as library names with version numbers, that are needed to replicate the experiment. |
| Experiment Setup | Yes | The target distribution µ is chosen to be a Gaussian mixture with 100 components... (x_i)_{i=1,...,100} are randomly sampled from a normal distribution N(0, σ²I_d), where σ = 5 in all experiments. The standard deviation of the target is set to ϵ = ϵ_0/d, where ϵ_0 = 1 in our setting... The mean of each component (x_i)_{i=1,...,10} is randomly initialized, sampled from a normal distribution N(0, ζ²I_d), where ζ = 15... The step-size is set as γ = γ_0/d, where γ_0 = 0.01. (Appendix F) and The expectations in (5) with respect to the Gaussian kernel are estimated by Monte Carlo with 100 samples. (Section 3) |
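The experiment setup quoted in the table can be sketched in code. The following is a minimal, hedged reconstruction of the Appendix F configuration: the dimension `d`, the equal component weights, and the use of the same variance ϵ² for both the target and the variational components are assumptions not confirmed by the excerpts. It estimates the ELBO by Monte Carlo with 100 samples, as described in Section 3; the paper's actual update rule (Equation 6) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 2             # dimension (not stated in the excerpts; assumption)
K_target = 100    # number of target mixture components
K_var = 10        # number of variational mixture components
sigma = 5.0       # std of the distribution of target component means
eps = 1.0 / d     # fixed std: eps = eps_0 / d with eps_0 = 1
zeta = 15.0       # std used to initialise the variational means
gamma = 0.01 / d  # step size gamma_0 / d (not used in this ELBO-only sketch)
n_mc = 100        # Monte Carlo samples, as in Section 3

# Target: equal-weight Gaussian mixture with 100 components,
# means drawn from N(0, sigma^2 I_d)
target_means = rng.normal(0.0, sigma, size=(K_target, d))

# Variational family: mixture of 10 Gaussians with fixed variance eps^2,
# means initialised from N(0, zeta^2 I_d)
var_means = rng.normal(0.0, zeta, size=(K_var, d))

def log_mixture(y, means, std):
    """Log-density of an equal-weight isotropic Gaussian mixture."""
    diffs = y[:, None, :] - means[None, :, :]          # (n, K, d)
    log_comp = (-0.5 * np.sum(diffs**2, axis=-1) / std**2
                - 0.5 * d * np.log(2.0 * np.pi * std**2))
    return np.logaddexp.reduce(log_comp, axis=1) - np.log(means.shape[0])

def elbo_estimate(var_means, target_means, n=n_mc):
    """Monte Carlo ELBO estimate E_q[log mu(Y) - log q(Y)] with n samples."""
    comp = rng.integers(0, var_means.shape[0], size=n)  # pick components
    y = var_means[comp] + eps * rng.normal(size=(n, d)) # sample from q
    return np.mean(log_mixture(y, target_means, eps)
                   - log_mixture(y, var_means, eps))

print(elbo_estimate(var_means, target_means))
```

Since both densities are normalised, the exact ELBO equals −KL(q‖µ) ≤ 0; with ζ = 15 placing the initial variational means far from the target means (σ = 5), the estimate is strongly negative at initialisation, which is what the training in Appendix F would then improve.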