Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

EiGLasso for Scalable Sparse Kronecker-Sum Inverse Covariance Estimation

Authors: Jun Ho Yoon, Seyoung Kim

JMLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | On simulated and real-world data, we demonstrate that EiGLasso achieves two to three orders-of-magnitude speed-up compared to the existing methods. We present more extensive experimental results to demonstrate the performance of EiGLasso and to provide insights into its convergence behavior. Section 5 is titled "Experiments".
Researcher Affiliation | Academia | Jun Ho Yoon EMAIL Seyoung Kim EMAIL Computational Biology Department, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA
Pseudocode | Yes | Algorithm 1: Line Search; Algorithm 2: EiGLasso
Open Source Code | Yes | The EiGLasso software is available at https://github.com/SeyoungKimLab/EiGLasso.
Open Datasets | Yes | The mouse gene-expression data are publicly available from Gonzales et al. (2018).
Dataset Splits | No | The paper mentions forming smaller datasets by hierarchical clustering or using a subset of days (e.g., "We selected 10,000 genes from each tissue type", "over 500 days for the same 306 companies"), but it does not specify explicit training, validation, or test splits for any of the datasets used in experiments.
Hardware Specification | Yes | All experiments were run on a single core of an Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz.
Software Dependencies | No | The paper states: "We implemented EiGLasso in C++ with the sequential version of the Intel Math Kernel Library." However, a specific version number for the Intel Math Kernel Library is not provided.
Experiment Setup | Yes | To assess convergence, we used the criterion that the relative decrease in the objective function value f_t at iteration t satisfies |f_t - f_{t-1}| / f_t < ϵ for three consecutive iterations. We used ϵ = 10^-3 as the convergence criterion for EiGLasso with all values of K, but ran TeraLasso until it reached a similar objective value to the one EiGLasso reached with ϵ = 10^-3, which typically required ϵ = 10^-6, 10^-7, or 10^-8. The regularization parameters were selected such that the number of non-zero elements of the estimated parameters roughly matched that of the true parameters. ... We selected the regularization parameters γ = γ_Θ = γ_Ψ for EiGLasso and used the selected γ for TeraLasso. We selected the regularization parameters from the range [0.1, 1.0] using the Bayesian Information Criterion (BIC). We used BIC to select the optimal regularization parameter from 10 different values in the range [0.01, 1.0].
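The stopping rule quoted in the Experiment Setup row can be sketched as follows. This is a minimal illustrative implementation, not the authors' code; the function name and the positive, decreasing objective values are assumptions for the example.

```python
def has_converged(objective_values, eps=1e-3, patience=3):
    """Return True once the relative decrease |f_t - f_{t-1}| / f_t
    stays below `eps` for `patience` consecutive iterations, as in
    the paper's convergence criterion (eps = 1e-3, patience = 3)."""
    consecutive = 0
    for t in range(1, len(objective_values)):
        f_t, f_prev = objective_values[t], objective_values[t - 1]
        if abs(f_t - f_prev) / abs(f_t) < eps:
            consecutive += 1
            if consecutive >= patience:
                return True
        else:
            # A large step resets the count; the criterion requires
            # three consecutive small relative decreases.
            consecutive = 0
    return False
```

With a trace whose last three relative decreases are each below 10^-3, the check fires; a trace that is still making large steps does not satisfy it.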