Learning Latent Variable Gaussian Graphical Models

Authors: Zhaoshi Meng, Brian Eriksson, Al Hero

ICML 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results are shown in Section 5 and we conclude in Section 6. We use a set of simulations on synthetic data to verify our reduced effective rank assumption on the covariance matrix of LVGGM, and the derived error bounds in Theorem 2.
Researcher Affiliation | Collaboration | Zhaoshi Meng (MENGZS@UMICH.EDU), Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109, USA; Brian Eriksson (BRIAN.ERIKSSON@TECHNICOLOR.COM), Technicolor Research Center, 735 Emerson Street, Palo Alto, CA 94301, USA; Alfred O. Hero III (HERO@EECS.UMICH.EDU), Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109, USA
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, nor does it include specific repository links or explicit code release statements.
Open Datasets | No | The paper uses synthetic data generated for simulations ('We generate LVGGM with independent latent variables', 'We simulate LVGGM data') and does not provide concrete access information (specific link, DOI, repository name, formal citation with authors/year) for a publicly available or open dataset.
Dataset Splits | No | The paper mentions drawing 'n samples from the LVGGM', but it does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) for training, validation, or testing.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions 'Efficient convex solver, such as Ma et al. (2013)', but does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment.
Experiment Setup | Yes | We fix the number of latent variables to be 10, and vary the number of observed variables p = {80, 120, 200, 500}. By scaling the magnitudes of the elements in the latent variable submatrix, we sweep through the relative energy ratio between the global and local factors, i.e., Tr(G)/Tr(S^{-1}), from 0.1 to 10. We simulate LVGGM data with the number of observed variables p = {160, 200, 320, 400} and the number of latent variables in the set r = {0.1, 0.15, 0.2, 0.3}p. The sparse conditional GGM is a chain graph whose associated precision matrix is tridiagonal with off-diagonal elements S_{i,i-1} = S_{i,i+1} = 0.4 S_{i,i} for i ∈ {2, ..., p-1}. We set the regularization parameters as λ = C_a √(s log(p)/n) and µ = C_b ρ √(r_eff log(p)/n), where the constants C_a and C_b are cross-validated and then fixed for all test data sets with different configurations.
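To make the Experiment Setup row concrete, the following is a minimal NumPy sketch of the kind of synthetic LVGGM simulation it describes: a chain-graph (tridiagonal) sparse conditional precision S, a rank-r latent "global" term L subtracted from it, sampling of the observed variables from N(0, (S − L)^{-1}), and regularization parameters of the quoted form. The random latent coupling, the latent_scale knob, the placeholder constants C_a, C_b, ρ, and all function names are assumptions made here for illustration; this is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_chain_precision(p, diag=1.0):
    """Tridiagonal precision of a chain GGM: S[i,i-1] = S[i,i+1] = 0.4 * S[i,i]."""
    S = diag * np.eye(p)
    idx = np.arange(p - 1)
    S[idx, idx + 1] = S[idx + 1, idx] = 0.4 * diag
    return S

def make_lvggm(p, r, latent_scale=1.0):
    """Sparse-minus-low-rank marginal precision S - L, with L of rank r."""
    S = make_chain_precision(p)
    # Random coupling of r independent latent variables to the observed ones
    # (an assumption; the paper does not spell out the coupling it simulates).
    B = latent_scale * rng.normal(size=(p, r)) / np.sqrt(p)
    L = B @ B.T
    # Shrink L if needed so the marginal precision S - L stays positive definite.
    s_min = np.linalg.eigvalsh(S)[0]
    l_max = np.linalg.eigvalsh(L)[-1]
    if l_max >= s_min:
        L *= 0.95 * s_min / l_max
    return S, L

def energy_ratio(S, L):
    """Tr(G)/Tr(S^{-1}), with G = (S - L)^{-1} - S^{-1} the global (low-rank) factor."""
    S_inv = np.linalg.inv(S)
    G = np.linalg.inv(S - L) - S_inv
    return np.trace(G) / np.trace(S_inv)

def sample_lvggm(S, L, n):
    """Draw n observed-variable samples from N(0, (S - L)^{-1})."""
    cov = np.linalg.inv(S - L)
    return rng.multivariate_normal(np.zeros(S.shape[0]), cov, size=n)

if __name__ == "__main__":
    p, r, n = 160, 16, 500                     # one configuration from the sweep above
    S, L = make_lvggm(p, r, latent_scale=1.0)  # vary latent_scale to sweep the ratio
    X = sample_lvggm(S, L, n)
    print("energy ratio Tr(G)/Tr(S^-1):", round(energy_ratio(S, L), 3))

    # Regularization parameters in the form quoted above; C_a and C_b are
    # cross-validated in the paper, so the values here are placeholders.
    s_nnz = 3 * p - 2                          # nonzeros of the tridiagonal S
    r_eff = r                                  # stand-in for the effective rank
    C_a = C_b = rho = 1.0
    lam = C_a * np.sqrt(s_nnz * np.log(p) / n)
    mu = C_b * rho * np.sqrt(r_eff * np.log(p) / n)
    print("lambda:", round(lam, 4), "mu:", round(mu, 4))
```

Sweeping latent_scale (i.e., the magnitudes of the latent-variable submatrix) and recording energy_ratio mimics the paper's sweep of Tr(G)/Tr(S^{-1}) from 0.1 to 10, up to the positive-definiteness cap applied in make_lvggm.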