Non-convex Statistical Optimization for Sparse Tensor Graphical Model

Authors: Wei Sun, Zhaoran Wang, Han Liu, Guang Cheng

NeurIPS 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our theoretical results are backed by thorough numerical studies. Finally, we conduct extensive experiments to evaluate the numerical performance of the proposed alternating minimization method."
Researcher Affiliation | Collaboration | Wei Sun, Yahoo Labs, Sunnyvale, CA (sunweisurrey@yahoo-inc.com); Zhaoran Wang, Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ (zhaoran@princeton.edu); Han Liu, Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ (hanliu@princeton.edu); Guang Cheng, Department of Statistics, Purdue University, West Lafayette, IN (chengg@stat.purdue.edu)
Pseudocode | Yes | "Algorithm 1: Solve sparse tensor graphical model via Tensor lasso (Tlasso)"
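For readers who want a concrete picture of the alternating minimization in Algorithm 1, below is a minimal sketch, assuming NumPy/SciPy and scikit-learn's graphical_lasso as the per-mode glasso subroutine. The function names (tlasso, mode_k_unfold, mode_k_product) and the solver choice are illustrative assumptions, not the authors' implementation, and the identifiability rescaling discussed in the paper is omitted.

```python
# Hedged sketch of Tlasso-style alternating minimization; not the authors' released code.
import numpy as np
from scipy.linalg import sqrtm
from sklearn.covariance import graphical_lasso


def mode_k_unfold(tensor, k):
    """Mode-k matricization: bring axis k to the front, then flatten the remaining modes."""
    return np.moveaxis(tensor, k, 0).reshape(tensor.shape[k], -1)


def mode_k_product(tensor, matrix, k):
    """Multiply `tensor` along mode k by `matrix`, then refold to tensor form."""
    out_shape = [matrix.shape[0]] + [s for j, s in enumerate(tensor.shape) if j != k]
    folded = (matrix @ mode_k_unfold(tensor, k)).reshape(out_shape)
    return np.moveaxis(folded, 0, k)


def tlasso(samples, lambdas, n_iter=1):
    """samples: array of shape (n, m_1, ..., m_K); lambdas: one tuning parameter per mode.

    Returns the list of estimated precision matrices Omega_1, ..., Omega_K.
    """
    n, dims = samples.shape[0], samples.shape[1:]
    K, m = len(dims), int(np.prod(dims))
    omegas = [np.eye(mk) for mk in dims]  # identity initialization, as reported in the paper
    for _ in range(n_iter):               # the paper reports a single sweep (T = 1)
        for k in range(K):
            sqrt_omegas = [np.real(sqrtm(om)) for om in omegas]
            S_k = np.zeros((dims[k], dims[k]))
            for i in range(n):
                Ti = samples[i]
                # Whiten every mode except k with the current Omega_j^{1/2}.
                for j in range(K):
                    if j != k:
                        Ti = mode_k_product(Ti, sqrt_omegas[j], j)
                V = mode_k_unfold(Ti, k)
                S_k += V @ V.T
            S_k *= dims[k] / (n * m)
            # One graphical-lasso step on the mode-k sample covariance.
            _, omegas[k] = graphical_lasso(S_k, alpha=lambdas[k])
    return omegas
```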
Open Source Code | No | The paper does not provide any specific links or explicit statements about releasing the source code for the methodology described.
Open Datasets | Yes | "Also, in the example of microarray study for aging [3], thousands of gene expression measurements are recorded on 16 tissue types on 40 mice with varying ages, which forms a four-way gene-tissue-mouse-age tensor." [3] J. Zahn, S. Poosala, A. Owen, D. Ingram, et al. AGEMAP: A gene expression database for aging in mice. PLOS Genetics, 3:2326-2337, 2007.
Dataset Splits | No | The paper describes different simulation scenarios (s1, s2, s3) based on sample size (n) and dimensions (m_k), but it does not specify any training, validation, or test dataset splits, nor does it mention cross-validation.
Hardware Specification | No | The paper does not mention any specific hardware used for running the experiments, such as CPU or GPU models, or cloud computing specifications.
Software Dependencies | No | The paper mentions the glasso algorithm [21] and the huge package [29] but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | In the Tlasso algorithm, the initialization of the k-th precision matrix is the identity matrix 1_{m_k} for each k = 1, ..., K, and the total number of iterations is T = 1. The tuning parameter λ_k is set as 20√(log m_k / (n m m_k)), where m denotes the product of the tensor dimensions m_1 ⋯ m_K.
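As a quick worked illustration of that tuning rule, read here as λ_k = 20·sqrt(log m_k / (n·m·m_k)) with m the product of all tensor dimensions, a small helper is sketched below. The dimensions and sample size in the example call are illustrative numbers, not the paper's exact simulation settings.

```python
# Hypothetical helper for the reported tuning rule; the constant 20 is from the paper,
# the example dims/n below are illustrative only.
import numpy as np


def tuning_parameters(dims, n, C=20.0):
    """lambda_k = C * sqrt(log(m_k) / (n * m * m_k)), with m = prod(dims)."""
    m = int(np.prod(dims))
    return [C * np.sqrt(np.log(mk) / (n * m * mk)) for mk in dims]


# Illustrative call for a 3-way tensor:
print(tuning_parameters(dims=(10, 10, 10), n=50))
```

Values produced this way could feed the lambdas argument of the alternating-minimization sketch shown earlier.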