Elementary Estimators for Graphical Models

Authors: Eunho Yang, Aurelie C. Lozano, Pradeep K. Ravikumar

NeurIPS 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We corroborate this statistical performance, as well as significant computational advantages via simulations of both discrete and Gaussian graphical models." From Section 6 (Experiments): "In this section, we report a set of synthetic experiments corroborating our theoretical results on both Gaussian and discrete graphical models."
Researcher Affiliation | Collaboration | Eunho Yang (IBM T.J. Watson Research Center, eunhyang@us.ibm.com); Aurelie C. Lozano (IBM T.J. Watson Research Center, aclozano@us.ibm.com); Pradeep Ravikumar (University of Texas at Austin, pradeepr@cs.utexas.edu)
Pseudocode | No | The paper describes the mathematical formulations and properties of the estimators but does not present any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any statement about releasing source code, nor does it provide a link to a code repository for the described methodology.
Open Datasets | No | The data are synthetic rather than drawn from an open dataset: "To generate true inverse covariance matrices with a random sparsity structure, we follow the procedure described in [25, 24]. For each case, the size of the alphabet is set to m = 3; the true parameter vector is generated by sampling each non-zero entry from N(0, 1)." A hedged sketch of such a synthetic generator appears after the table.
Dataset Splits | No | The paper mentions "cross-validation" for selecting a tuning parameter, but it does not specify a train/validation/test split for a fixed dataset, as the data is generated synthetically for each run.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions using "QUIC algorithms [24]" but does not specify software dependencies with version numbers.
Experiment Setup | Yes | "We fix the thresholding parameter ν = 2.5√(log p / n) for all settings, and vary the regularization parameter λn = K√(log p / n) to investigate how this regularizer affects the final estimators." For the discrete models: "the size of the alphabet is set to m = 3; the tuning parameter is set to λn = c√(log p / n), where c is selected using cross-validation." A code sketch of these settings follows the table.
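
The Open Datasets row quotes a terse data-generation recipe. The following is a minimal Python sketch of one way to generate a sparse Gaussian graphical model of that kind; the edge probability, the diagonal boost, and the function name random_sparse_precision are illustrative assumptions, not the exact procedure of [25, 24].

```python
import numpy as np

def random_sparse_precision(p, edge_prob=0.05, seed=0):
    """Random p x p precision matrix with a sparse off-diagonal pattern.

    edge_prob and the positive-definiteness boost below are illustrative
    choices; the paper follows the exact procedure of [25, 24].
    """
    rng = np.random.default_rng(seed)
    # Random upper-triangular sparsity pattern with N(0, 1) weights.
    mask = np.triu(rng.random((p, p)) < edge_prob, k=1)
    off = np.where(mask, rng.normal(size=(p, p)), 0.0)
    theta = off + off.T
    # Shift the diagonal so the smallest eigenvalue is safely positive.
    theta += (np.abs(np.linalg.eigvalsh(theta).min()) + 0.5) * np.eye(p)
    return theta

# Draw n i.i.d. samples from N(0, Theta^{-1}).
p, n = 100, 500
theta = random_sparse_precision(p)
X = np.random.default_rng(1).multivariate_normal(
    np.zeros(p), np.linalg.inv(theta), size=n)
```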
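The parameter settings in the Experiment Setup row can likewise be made concrete. The sketch below assumes an elementary-estimator pipeline of the form "threshold the sample covariance, invert, then soft-threshold at level λn", which is one reading of the paper that this report does not spell out; the sweep values for K and the helper soft_threshold are hypothetical.

```python
import numpy as np

def soft_threshold(A, t):
    """Element-wise soft-thresholding, preserving the diagonal."""
    S = np.sign(A) * np.maximum(np.abs(A) - t, 0.0)
    np.fill_diagonal(S, np.diag(A))  # assumption: threshold off-diagonals only
    return S

# Placeholder data; in the experiments X would come from the generator above.
n, p = 500, 100
X = np.random.default_rng(2).normal(size=(n, p))
sigma_hat = np.cov(X, rowvar=False)

nu = 2.5 * np.sqrt(np.log(p) / n)        # fixed thresholding parameter
for K in (0.5, 1.0, 2.0):                # hypothetical sweep values for K
    lam = K * np.sqrt(np.log(p) / n)     # regularization parameter lambda_n
    # Assumed pipeline: threshold the sample covariance, invert
    # (invertibility is assumed), then soft-threshold the result.
    theta_hat = soft_threshold(np.linalg.inv(soft_threshold(sigma_hat, nu)), lam)
```

Under the paper's discrete-model setting, the constant c in λn = c√(log p / n) would be chosen by cross-validation over a sweep like the one above rather than fixed in advance.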