Bayesian Joint Estimation of Multiple Graphical Models
Authors: Lingrui Gan, Xinming Yang, Naveen Narisetty, Feng Liang
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through simulation studies and an application to the capital bike-sharing network data, we demonstrate the competitive performance of our method compared to existing alternatives. (Sections 4 Numerical Studies, 4.2 Simulation Results, 4.3 Application to Capital Bikeshare Data) |
| Researcher Affiliation | Academia | Department of Statistics University of Illinois at Urbana-Champaign {lgan6, xyang104, naveen, liangf}@illinois.edu |
| Pseudocode | No | For computation, we propose an EM algorithm by treating Γ = (γ_ij) as latent variables and estimating Θ by applying the following two steps iteratively. E-step: calculate the posterior distribution P(γ_ij = 1 \| Θ^(t)) := p_ij(θ_ij^(t)), which follows the formula in (2.5), and compute the so-called Q function, the expectation of the full log-likelihood with respect to P(γ_ij = 1 \| Θ^(t)). M-step: the Q function is a summation of K terms, each of which is a weighted graphical Lasso [9] problem. (A hedged code sketch of this EM loop appears after the table.) |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | We use Capital Bikeshare trip data to evaluate the performance of the proposed method. The data contains records of bike rentals in a bicycle sharing system with more than 500 stations... Data available at https://www.capitalbikeshare.com/system-data |
| Dataset Splits | Yes | Then, we take the first 80% of observations in each class as training data and the other 20% as test data. (A sketch of this split appears after the table.) |
| Hardware Specification | Yes | The average computational times of all the methods using a MacBook Pro with 2.9 GHz Intel Core i5 processor and 8.00 GB memory are reported in Table 3. |
| Software Dependencies | No | The paper mentions general algorithms and methods like 'graphical Lasso' and 'EM algorithm' but does not specify any software packages with version numbers (e.g., Python 3.8, PyTorch 1.9) required for replication. |
| Experiment Setup | Yes | In each setting, we set K = 3 and p = n_k = 100... For all methods, we use a grid search to select the set of hyperparameters that minimizes BIC. For BAGUS and Pooled methods, we follow the same tuning procedure in [10] and tune the spike and slab prior parameters (v0, v1) with v0 ∈ (0.25, 0.5, 0.75, 1) × √(1/(n log p)) and v1 ∈ (2.5, 5, 7.5, 10) × √(1/(n log p)). For GGL, we tune the two penalty parameters (λ1, λ2) as in [5] with λ1 ∈ (0.1, 0.2, ..., 1) and λ2 ∈ (0.1, 0.3, 0.5). (A grid-search sketch appears after the table.) |
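To make the EM description quoted in the Pseudocode row concrete, below is a minimal Python sketch. It assumes a simplified spike-and-slab Laplace mixture with a shared edge indicator γ_ij across the K graphs and a prior inclusion probability `eta`; the function names (`em_joint_graphs`, `laplace_density`) and the scalar-penalty approximation in the M-step are our own stand-ins, since `sklearn`'s `graphical_lasso` does not accept element-wise penalty weights. It is not the authors' implementation.

```python
import numpy as np
from sklearn.covariance import graphical_lasso

def laplace_density(theta, scale):
    """Density of a mean-zero Laplace distribution, applied element-wise."""
    return np.exp(-np.abs(theta) / scale) / (2.0 * scale)

def em_joint_graphs(S_list, n_list, v0=0.05, v1=1.0, eta=0.5, n_iter=20):
    """Hypothetical EM sketch; S_list holds the K sample covariance matrices."""
    K = len(S_list)
    p = S_list[0].shape[0]
    # Ridge-regularized inverses as a crude initialization of the precisions.
    Theta = [np.linalg.inv(S + 0.1 * np.eye(p)) for S in S_list]
    for _ in range(n_iter):
        # E-step: posterior probability that edge (i, j) is in the slab,
        # pooling evidence across the K graphs since gamma_ij is shared.
        slab = np.full((p, p), eta)
        spike = np.full((p, p), 1.0 - eta)
        for Th in Theta:
            slab *= laplace_density(Th, v1)
            spike *= laplace_density(Th, v0)
        p_post = slab / (slab + spike)
        # M-step: each of the K terms is a weighted graphical Lasso. sklearn
        # only supports a scalar penalty, so the mean implied weight serves
        # as a stand-in for a true element-wise weighted solver.
        w = p_post / v1 + (1.0 - p_post) / v0
        for k in range(K):
            alpha = float(np.mean(w)) / n_list[k]
            _, Theta[k] = graphical_lasso(S_list[k], alpha=alpha)
    return Theta, p_post
```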
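The Dataset Splits row quotes a chronological 80/20 split within each class. A minimal sketch of that split follows; the helper name and array layout are assumptions, not the authors' code.

```python
import numpy as np

def split_by_class(X, labels, train_frac=0.8):
    """Take the first 80% of observations in each class as training data
    and the remaining 20% as test data (no shuffling, per the quoted split)."""
    train, test = [], []
    for c in np.unique(labels):
        Xc = X[labels == c]
        cut = int(train_frac * len(Xc))
        train.append(Xc[:cut])
        test.append(Xc[cut:])
    return train, test
```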
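The Experiment Setup row describes a BIC-minimizing grid search over the spike-and-slab scales (v0, v1). The sketch below wires the quoted grids to a generic fitting routine: `fit_fn` is a hypothetical placeholder (e.g., the EM sketch above), and `bic_score` uses the standard Gaussian graphical model BIC, n·(tr(SΘ) − logdet Θ) + log(n) × (number of edges), which may differ in detail from the paper's criterion.

```python
import numpy as np
from itertools import product

def bic_score(S, Theta, n):
    """Standard BIC for a Gaussian graphical model with precision Theta."""
    _, logdet = np.linalg.slogdet(Theta)
    n_edges = (np.count_nonzero(Theta) - Theta.shape[0]) // 2
    return n * (np.trace(S @ Theta) - logdet) + np.log(n) * n_edges

def select_hyperparams(S_list, n_list, fit_fn, n, p):
    """Grid search over (v0, v1) on the paper's quoted grids, minimizing
    the total BIC across the K classes."""
    scale = np.sqrt(1.0 / (n * np.log(p)))
    v0_grid = np.array([0.25, 0.5, 0.75, 1.0]) * scale
    v1_grid = np.array([2.5, 5.0, 7.5, 10.0]) * scale
    best, best_bic = None, np.inf
    for v0, v1 in product(v0_grid, v1_grid):
        Thetas, _ = fit_fn(S_list, n_list, v0=v0, v1=v1)
        bic = sum(bic_score(S, Th, nk)
                  for S, Th, nk in zip(S_list, Thetas, n_list))
        if bic < best_bic:
            best, best_bic = (v0, v1), bic
    return best
```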