Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Scalable Inference of Sparsely-changing Gaussian Markov Random Fields

Authors: Salar Fattahi, Andres Gomez

NeurIPS 2021

Reproducibility Variable Result LLM Response
Research Type | Experimental | "In this section, we evaluate the performance of the proposed estimator in synthetically generated massive datasets, and a case study on the correlation network inference in stock markets." and "Figure 7 depicts TPR, FPR, and the max-norm error of the estimated parameters, as well as the runtime of our algorithm for different values of d with and without parallelization."
Researcher Affiliation | Academia | "Salar Fattahi, Department of Industrial & Operations Engineering, University of Michigan, Ann Arbor, MI 48109, EMAIL" and "Andrés Gómez, Department of Industrial & Systems Engineering, University of Southern California, Los Angeles, CA 90089, EMAIL"
Pseudocode | Yes | "Algorithm 1 Greedy(l, u, , T)" and "Algorithm 2 Algorithm for solving (7)"
Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the described methodology is publicly available.
Open Datasets | No | The paper mentions using 'synthetically generated massive datasets' and 'daily changes for 214 securities from 1990/01/04 to 2017/08/10'. While it references a NASDAQ chart [2], it does not provide concrete access information (link, DOI, repository, or formal citation for the dataset itself) for either the synthetic or real-world data used in the experiments.
Dataset Splits | No | The paper does not explicitly specify training, validation, or test dataset splits (e.g., exact percentages or sample counts). It mentions collecting samples but gives no detailed splitting methodology.
Hardware Specification | No | The paper evaluates runtime with different numbers of cores (single, 5, and 10) but does not provide specific hardware details such as CPU/GPU models, processor types, or memory used for the experiments.
Software Dependencies | No | The paper describes algorithms and mathematical formulations but does not list any specific software dependencies or libraries with version numbers (e.g., Python 3.x, PyTorch 1.x) used for implementation.
Experiment Setup | Yes | "In all of our simulations, the parameters t and λt are chosen directly from the data samples, i.e., without prior knowledge of the true solution, via the Bayesian Information Criterion (BIC) [31, 13]." and "The regularization parameter γ in the objective function of (3) is set to 0.2." and "for the choices of 0 = 3, λ0 = 0.16, and γ = 0.9"
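The TPR, FPR, and max-norm error quoted in the Research Type row are standard support-recovery metrics for sparse precision-matrix estimation. A minimal sketch of how they are typically computed (NumPy; the function name and tolerance are illustrative, not taken from the paper):

```python
import numpy as np

def support_metrics(theta_hat, theta_true, tol=1e-8):
    """TPR/FPR of support recovery and the max-norm estimation error
    between an estimated and a true (sparse) precision matrix."""
    est = np.abs(theta_hat) > tol    # estimated sparsity pattern
    true = np.abs(theta_true) > tol  # true sparsity pattern
    tpr = np.logical_and(est, true).sum() / max(true.sum(), 1)
    fpr = np.logical_and(est, ~true).sum() / max((~true).sum(), 1)
    max_err = np.max(np.abs(theta_hat - theta_true))  # max-norm error
    return tpr, fpr, max_err
```

Here TPR is the fraction of true nonzero entries recovered and FPR the fraction of true zeros falsely declared nonzero; the `max(..., 1)` guards avoid division by zero for fully dense or fully sparse truths.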
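The Experiment Setup row notes that the tuning parameters are chosen from the data via BIC. A generic sketch of BIC-based selection for a Gaussian graphical model, shown only to illustrate the criterion; this is not the authors' implementation, and `fit`, the grid, and the sparsity threshold are hypothetical:

```python
import numpy as np

def gaussian_bic(sample_cov, theta_hat, n):
    """BIC = k*log(n) - 2*loglik for a Gaussian graphical model,
    where theta_hat is the estimated precision matrix and n the sample count."""
    # Gaussian log-likelihood (up to constants): n/2 * (logdet(Theta) - tr(S Theta))
    loglik = 0.5 * n * (np.linalg.slogdet(theta_hat)[1]
                        - np.trace(sample_cov @ theta_hat))
    # free parameters: nonzero entries on and above the diagonal
    k = (np.abs(np.triu(theta_hat)) > 1e-8).sum()
    return k * np.log(n) - 2.0 * loglik

def select_by_bic(fit, sample_cov, n, grid):
    """Return the regularization value on `grid` with the smallest BIC;
    `fit` maps a regularization value to an estimated precision matrix."""
    scores = {lam: gaussian_bic(sample_cov, fit(lam), n) for lam in grid}
    return min(scores, key=scores.get)
```

Minimizing BIC trades off fit (the log-likelihood term) against model size (the k*log(n) penalty), which is why it can pick the regularization strength without access to the true solution.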