Learning the piece-wise constant graph structure of a varying Ising model

Authors: Batiste Le Bars, Pierre Humbert, Argyris Kalogeratos, Nicolas Vayatis

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, experimental results on several synthetic datasets and a real-world dataset demonstrate the performance of our method.
Researcher Affiliation | Academia | Université Paris-Saclay, ENS Paris-Saclay, CNRS, Centre Borelli, F-91190 Gif-sur-Yvette, France.
Pseudocode | No | The paper describes the optimization program and methodology using mathematical formulations and descriptive text, but it does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code of TVI-FL is available online and provided in the supplementary material, as well as a Jupyter Notebook reproducing the results and figures of the real-world example.
Open Datasets | Yes | We analyze the different votes of the Illinois House of Representatives during the period of the 114th and 115th US Congresses (2015-2019), which are available at voteview.com (Lewis et al., 2020). To generate a piece-wise constant Ising model, we first pick a degree value d ∈ {2, 3, 4} and draw independently 3 random d-regular graphs, one for each submodel... For each submodel, we draw observations using Gibbs sampling with a burn-in period of 1000 samples. Moreover, we collect one observation every 20 samples (lag) to avoid dependencies between them. (A Gibbs-sampling sketch is given after the table.)
Dataset Splits | Yes | The second technique, based on cross-validation (CV), assumes that more than one sample is observed at each moment in time i = 1, ..., n. Besides, to be able to perform CV, we also sample 5 more observations per timestamp and use them only in the testing phase.
Hardware Specification | No | All the experiments were implemented using Python and conducted on a personal laptop. This statement is too general and does not provide specific details on the CPU, GPU, or memory of the laptop.
Software Dependencies | No | All the experiments were implemented using Python... In this work, we use the python package CVXPY (Diamond & Boyd, 2016)... The paper mentions Python and CVXPY but does not provide specific version numbers for either. (A CVXPY sketch of the type of penalized problem involved is given after the table.)
Experiment Setup | Yes | We first pick a degree value d ∈ {2, 3, 4} and draw independently 3 random d-regular graphs... For each submodel, we draw observations using Gibbs sampling with a burn-in period of 1000 samples. Moreover, we collect one observation every 20 samples (lag) to avoid dependencies between them. For each experiment, we use a random-search strategy to find the best pair of hyperparameters (λ1, λ2) in [4, 15] × [30, 40]. The first model-selection criterion is the Akaike Information Criterion (AIC), which computes the average over all nodes of AIC(β̂^a) := 2 L_a(β̂^a) + 2 Dim(β̂^a). (A random-search and AIC sketch is given after the table.)
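
The synthetic-data protocol quoted above (random d-regular graphs, Gibbs sampling with a burn-in of 1000 and a lag of 20) can be illustrated with a minimal sketch. The burn-in and lag are interpreted here as full Gibbs sweeps, the edge weight 0.5 and the use of networkx are illustrative assumptions, and none of the names below come from the paper's code.

```python
import numpy as np
import networkx as nx

def sample_ising_gibbs(W, n_samples, burn_in=1000, lag=20, seed=None):
    """Sample from an Ising model with coupling matrix W via Gibbs sampling.

    Spins live in {-1, +1}; node a is resampled from its conditional
    P(x_a = +1 | rest) = sigmoid(2 * sum_b W[a, b] * x_b).
    """
    rng = np.random.default_rng(seed)
    p = W.shape[0]
    x = rng.choice([-1, 1], size=p)              # random initial configuration
    samples = []
    for t in range(burn_in + n_samples * lag):
        for a in range(p):                       # one full sweep over the nodes
            field = W[a] @ x - W[a, a] * x[a]    # exclude any self-coupling
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
            x[a] = 1 if rng.random() < p_plus else -1
        # keep one configuration every `lag` sweeps after the burn-in period
        if t >= burn_in and (t - burn_in) % lag == 0:
            samples.append(x.copy())
    return np.array(samples)

# One submodel: a random d-regular graph turned into a coupling matrix.
# The edge weight 0.5 is an illustrative choice, not taken from the paper.
d, p = 3, 20
G = nx.random_regular_graph(d, p, seed=0)
W = 0.5 * nx.to_numpy_array(G)
X = sample_ising_gibbs(W, n_samples=100)         # 100 observations for this submodel
```

A piece-wise constant model is then obtained by concatenating the observations drawn from the three submodels.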
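
The paper states that CVXPY is used to solve the estimation program. The exact TVI-FL objective is given in the paper; the snippet below is only a rough sketch, assuming a per-node logistic-type loss with an ℓ1 (sparsity) penalty and a total-variation penalty on successive differences of the time-varying coefficients. The dimensions, penalty values, the group ℓ2 form of the temporal penalty, and all variable names are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

# Illustrative data: n time steps, p predictor nodes, observations in {-1, +1}.
n, p = 60, 10
rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(n, p))   # neighbouring nodes
y = rng.choice([-1.0, 1.0], size=n)        # node being regressed

beta = cp.Variable((n, p))                 # one coefficient vector per time step

# Logistic-type loss: sum_i log(1 + exp(-y_i * <x_i, beta_i>)).
margins = cp.multiply(y, cp.sum(cp.multiply(X, beta), axis=1))
loss = cp.sum(cp.logistic(-margins))

lambda1, lambda2 = 5.0, 35.0               # illustrative values inside the searched ranges
sparsity = lambda1 * cp.sum(cp.abs(beta))                                # l1 penalty
smoothness = lambda2 * cp.sum(cp.norm(beta[1:] - beta[:-1], 2, axis=1))  # temporal penalty

problem = cp.Problem(cp.Minimize(loss + sparsity + smoothness))
problem.solve()
print(problem.status)
```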
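
Hyperparameter selection can be sketched in the same spirit: draw (λ1, λ2) uniformly in [4, 15] × [30, 40] and keep the pair with the best model-selection score. Taking L_a as a negative log-likelihood, counting Dim(·) as the number of non-zero coefficients, and the `fit_and_score` helper are all assumptions of this sketch.

```python
import numpy as np

def aic(neg_log_likelihood, beta_hat, tol=1e-6):
    # AIC-style criterion quoted above: 2 * L_a(beta_hat) + 2 * Dim(beta_hat),
    # with L_a a negative log-likelihood and Dim() the number of coefficients
    # whose magnitude exceeds a small tolerance (assumptions of this sketch).
    dim = int(np.sum(np.abs(beta_hat) > tol))
    return 2.0 * neg_log_likelihood + 2.0 * dim

def random_search(fit_and_score, n_trials=20, seed=None):
    # Random search over (lambda1, lambda2) in [4, 15] x [30, 40]; keeps the
    # pair with the lowest criterion. `fit_and_score(lam1, lam2)` is a
    # hypothetical helper that fits the model and returns e.g. the averaged AIC.
    rng = np.random.default_rng(seed)
    best_pair, best_score = None, np.inf
    for _ in range(n_trials):
        lam1, lam2 = rng.uniform(4.0, 15.0), rng.uniform(30.0, 40.0)
        score = fit_and_score(lam1, lam2)
        if score < best_score:
            best_pair, best_score = (lam1, lam2), score
    return best_pair, best_score
```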