Learning Additive Exponential Family Graphical Models via $\ell_{2,1}$-norm Regularized M-Estimation
Authors: Xiaotong Yuan, Ping Li, Tong Zhang, Qingshan Liu, Guangcan Liu
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper reports empirical results: "The advantages of our estimators over Gaussian graphical models and Nonparanormal estimators are demonstrated on synthetic and real data sets." |
| Researcher Affiliation | Academia | B-DAT Lab, Nanjing University of Info. Sci. & Tech., Nanjing, Jiangsu, 210044, China; Dept. of Statistics and Dept. of Computer Science, Rutgers University, Piscataway, NJ, 08854, USA |
| Pseudocode | No | The paper describes methods mathematically and textually but does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | For the simulated data, the paper states, "We generate a training sample of size n from the true graphical model," but it does not provide access information (link, DOI, or formal citation to a public repository) for this generated dataset. For the real data, it states, "This data contains the historical prices of S&P500 stocks over 5 years," but no access information is provided. |
| Dataset Splits | No | The paper states that it draws "an independent sample of the same size from the same distribution for tuning the strength parameter λn," which implies a validation process. However, it does not specify explicit percentages or sample counts for training, validation, or test splits, nor does it refer to any predefined or standard splits that would allow for reproduction. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU or CPU models, or specific machine configurations. |
| Software Dependencies | No | The paper mentions using "Graphical Lasso" and "SKEPTIC" as baselines, which are established methods/packages, but it does not provide specific version numbers for these or any other software dependencies required for replication. |
| Experiment Setup | Yes | The paper specifies the protocol: "We will consider the model under different levels of sparsity by adjusting the probability P. For simplicity purpose, we assume fs(Xs) ≡ 1 and consider a nonlinear pairwise interaction function fst(Xs, Xt) = cos(π(Xs − Xt)/5). We fit the data to the additive model (4) with a 2-D Fourier basis of size 8. Using Gibbs sampling, we generate a training sample of size n from the true graphical model, and an independent sample of the same size from the same distribution for tuning the strength parameter λn. We compare performance for n = 200, varying values of p ∈ {50, 100, 150, 200, 250, 300}, and different sparsity levels under P = {0.02, 0.05, 0.1}," replicated 10 times for each configuration (a hedged simulation sketch follows the table). |
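The Experiment Setup row pins down a concrete data-generation protocol: an Erdős–Rényi graph with edge probability P, pairwise interactions fst(Xs, Xt) = cos(π(Xs − Xt)/5), and Gibbs sampling from the resulting model. Below is a minimal sketch of that generation step in Python/NumPy. The truncated support [−5, 5] per variable, the grid resolution, the burn-in length, and all function names are illustrative assumptions, not the authors' implementation (the paper does not state these details here).

```python
import numpy as np

rng = np.random.default_rng(0)

def make_graph(p, edge_prob):
    """Erdos-Renyi graph: each edge present independently with probability edge_prob."""
    upper = np.triu(rng.random((p, p)) < edge_prob, k=1)
    return upper | upper.T

def gibbs_sample(A, n, n_burn=500, grid=np.linspace(-5.0, 5.0, 512)):
    """Griddy Gibbs sampler for the pairwise model
    p(x) proportional to exp(sum over edges (s,t) of cos(pi * (x_s - x_t) / 5)),
    truncated to [-5, 5]^p. Support and burn-in length are assumptions."""
    p = A.shape[0]
    x = rng.uniform(grid[0], grid[-1], size=p)
    out = np.empty((n, p))
    for it in range(n_burn + n):
        for s in range(p):
            nbrs = np.flatnonzero(A[s])
            if nbrs.size:
                # Unnormalized log conditional of x_s given its neighbors,
                # evaluated at every grid point.
                logc = np.cos(np.pi * (grid[:, None] - x[nbrs]) / 5.0).sum(axis=1)
            else:
                logc = np.zeros_like(grid)
            w = np.exp(logc - logc.max())
            x[s] = rng.choice(grid, p=w / w.sum())
        if it >= n_burn:
            out[it - n_burn] = x
    return out

# One configuration from the paper: p = 50 nodes, edge probability P = 0.05,
# n = 200 training samples plus an independent tuning sample of the same size.
A = make_graph(50, 0.05)
X_train = gibbs_sample(A, n=200)
X_tune = gibbs_sample(A, n=200)
```

Fitting would then proceed node-wise: expand each pairwise term in a 2-D Fourier basis of size 8, estimate the coefficients under ℓ2,1 (group-lasso) regularization, and select λn on the independent tuning sample. That estimation step is omitted from the sketch, since the table does not reproduce the paper's exact estimator details.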