Learning Large-Scale MTP$_2$ Gaussian Graphical Models via Bridge-Block Decomposition

Authors: Xiwen Wang, Jiaxi Ying, Daniel P. Palomar

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The synthetic and real-world experiments demonstrate that our proposed method presents a significant speed-up compared to the state-of-the-art benchmarks.
Researcher Affiliation | Academia | Xiwen Wang1, Jiaxi Ying1,2, Daniel P. Palomar1; 1The Hong Kong University of Science and Technology, 2HKUST Shenzhen-Hong Kong Collaborative Innovation Research Institute; {xwangew, jx.ying}@connect.ust.hk, palomar@ust.hk
Pseudocode | No | The paper does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | Yes | Codes are available in https://github.com/Xiwen1997/mtp2-bbd.
Open Datasets | Yes | We consider learning the MTP$_2$ GGM for the Crop Image dataset available from the UCR Time Series Archive [56].
Dataset Splits | No | The paper does not explicitly provide train/validation/test dataset splits (specific percentages, counts, or references to predefined splits) for the main experiments. For the real-world dataset it mentions 'using the first 10 observations', with 'the remaining 36 observations ... used to calculate the out-of-sample log-likelihood', but this applies only to a specific test of the MTP$_2$ assumption, not to the general experimental setup for model training and evaluation.
Hardware Specification | Yes | All experiments were conducted on 2.60 GHz Xeon Gold 6348 machines and Linux OS.
Software Dependencies | No | All methods are implemented in MATLAB and the state-of-the-art methods we consider include BCD: Block Coordinate Descent [19]... (no version numbers are provided for MATLAB or any specific libraries).
Experiment Setup | Yes | We begin with an underlying graph that has an adjacency matrix $A \in \mathbb{S}^p$, and define $\Theta = \delta I - A$, where $\delta = 1.05\,\lambda_{\max}(A)$ and $\lambda_{\max}(A)$ represents the largest eigenvalue of $A$. ... We then sample $n = 10p$ data points from a Gaussian distribution $\mathcal{N}(0, \Theta^{-1})$ and calculate the sample covariance matrix as $S$. ... we set $\Lambda_{ij} = \chi / (|\Theta^{(0)}_{ij}| + \epsilon)$ when $i \neq j$ and $\Lambda_{ij} = 0$ when $i = j$. Here, $\chi > 0$ determines the sparsity level and $\epsilon$ is a small positive constant, such as $10^{-3}$. ... For the real-world experiment, the regularization matrix $\Lambda$ is determined using the approach in Section 4.1 with $\epsilon = 0.01$ and $\chi = 0.2$.
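The Experiment Setup row fully specifies the synthetic-data pipeline, so a small worked example helps make it concrete. Below is a minimal NumPy sketch of that pipeline. Assumptions to note: the function name `generate_synthetic_problem` is hypothetical; the reciprocal form $\Lambda_{ij} = \chi / (|\Theta^{(0)}_{ij}| + \epsilon)$ reconstructs a fraction whose bar appears lost in extraction; and the initial estimate $\Theta^{(0)}$ is stood in for by the ground-truth $\Theta$ purely for illustration (the paper derives it from the data, per its Section 4.1). This is a sketch under those assumptions, not the authors' MATLAB implementation.

```python
import numpy as np

def generate_synthetic_problem(A, chi=0.2, eps=1e-3, rng=None):
    """Synthetic MTP2 setup quoted above: build Theta = delta*I - A,
    sample n = 10p points from N(0, Theta^{-1}), and form S and Lambda.

    A    : (p, p) symmetric adjacency matrix of the underlying graph
    chi  : sparsity-level parameter, chi > 0
    eps  : small positive constant (e.g., 1e-3 as in the quoted setup)
    """
    rng = np.random.default_rng(rng)
    p = A.shape[0]

    # delta = 1.05 * lambda_max(A) makes Theta = delta*I - A positive
    # definite with nonpositive off-diagonals, i.e., a valid MTP2
    # (M-matrix) precision matrix.
    delta = 1.05 * np.max(np.linalg.eigvalsh(A))
    Theta = delta * np.eye(p) - A

    # Sample n = 10p observations from N(0, Theta^{-1}) and compute
    # the sample covariance matrix S.
    n = 10 * p
    X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Theta), size=n)
    S = X.T @ X / n

    # Regularization matrix: zero on the diagonal, reciprocal weights
    # off the diagonal. ASSUMPTION: Theta0 is set to the true Theta
    # here only for illustration.
    Theta0 = Theta
    Lam = chi / (np.abs(Theta0) + eps)
    np.fill_diagonal(Lam, 0.0)

    return Theta, S, Lam
```

For instance, with a path-graph adjacency matrix `A = np.diag(np.ones(99), 1) + np.diag(np.ones(99), -1)`, calling `generate_synthetic_problem(A, chi=0.2, eps=1e-3, rng=0)` yields a $100 \times 100$ problem instance of the form the quoted setup describes.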