Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Sparse Estimation in Ising Model via Penalized Monte Carlo Methods
Authors: Blazej Miasojedow, Wojciech Rejchel
JMLR 2018 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The efficiency of the proposed method is investigated via numerical studies. First, we compare our method to three algorithms, which we have mentioned previously, using simulated data sets. Next we apply our method to the real data example. |
| Researcher Affiliation | Academia | Błażej Miasojedow EMAIL Institute of Applied Mathematics and Mechanics, University of Warsaw, ul. Banacha 2, 02-097 Warszawa, Poland, and Institute of Mathematics, Polish Academy of Sciences, ul. Śniadeckich 8, 00-656 Warszawa. Wojciech Rejchel EMAIL Faculty of Mathematics and Computer Science, Nicolaus Copernicus University, ul. Chopina 12/18, 87-100 Toruń, Poland, and Institute of Applied Mathematics and Mechanics, University of Warsaw, ul. Banacha 2, 02-097 Warszawa, Poland |
| Pseudocode | Yes | Algorithm 1 (MCMC Lasso for Ising model): Let λ₀ > λ₁ > ... > λ₁₀₀ and ψ = 0. For i = 1 to 100: simulate Y¹, ..., Yᵐ using a Gibbs sampler with the stationary distribution p(y\|ψ); run the FISTA algorithm to compute θ̂ᵢ = argmin_θ {ℓₙᵐ(θ) + λᵢ\|θ\|₁}; set ψ = θ̂ᵢ. Next, set θ̂ = θ̂_î, where î = argmin_{1≤i≤100} {n·ℓₙᵐ(θ̂ᵢ) + log(n)·‖θ̂ᵢ‖₀} and ‖θ‖₀ denotes the number of non-zero elements of θ. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. It doesn't mention releasing code, nor does it provide a link to a repository or state that code is available in supplementary materials. |
| Open Datasets | Yes | We apply our method to CAL500 dataset (Turnbull et al., 2008). |
| Dataset Splits | No | For each configuration of the model, the number of vertices d and the number of observation n we sample 100 replications of data sets. This describes data generation for multiple runs, not specific train/test/validation splits for a single dataset. For the CAL500 dataset, no splits are explicitly mentioned. |
| Hardware Specification | Yes | For instance, for a single data set and n = 40, computing the estimator on a 3.4 GHz CPU takes about 15 seconds for d = 190, 60 seconds for d = 1225, and 5 minutes for d = 4950. |
| Software Dependencies | No | In the current paper we use the FISTA algorithm with backtracking from Beck and Teboulle (2009). This mentions a method/algorithm but not a specific software library or its version number. No other software dependencies with version numbers are listed. |
| Experiment Setup | Yes | Algorithm 1: Let λ₀ > λ₁ > ... > λ₁₀₀ and ψ = 0. For i = 1 to 100: simulate Y¹, ..., Yᵐ using a Gibbs sampler with the stationary distribution p(y\|ψ); run the FISTA algorithm to compute θ̂ᵢ = argmin_θ {ℓₙᵐ(θ) + λᵢ\|θ\|₁}; set ψ = θ̂ᵢ. Next, set θ̂ = θ̂_î, where î = argmin_{1≤i≤100} {n·ℓₙᵐ(θ̂ᵢ) + log(n)·‖θ̂ᵢ‖₀} and ‖θ‖₀ denotes the number of non-zero elements of θ. In Algorithm 1 we use 100 values of λ uniformly spaced on the log scale, starting from the largest λ, which corresponds to the empty model. We use m = 10³·d iterations of the Gibbs sampler. To compute ℓₙᵐ(θ̂ᵢ) for i = 1, ..., 100 in BIC we generate one more sample of size m = 10⁴·d using the Gibbs sampler with the stationary distribution p(·\|θ̂₅₀). |
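To make the structure of Algorithm 1 concrete, here is a minimal, hedged Python sketch of its outer loop: a warm-started regularization path of 100 λ values on a log scale, a proximal-gradient inner solver, and BIC-style model selection n·loss + log(n)·‖θ‖₀. It is a toy stand-in only: it uses a least-squares loss and plain ISTA in place of the paper's Monte Carlo approximation of the Ising log-likelihood (refreshed by Gibbs sampling at each λ) and FISTA with backtracking; the function names `ista` and `mc_lasso_path` are illustrative inventions, not the authors' code.

```python
import numpy as np

def ista(grad_f, theta0, lam, step, iters=200):
    """Proximal gradient (ISTA) for loss + lam * ||theta||_1.
    A plain-ISTA stand-in for the paper's FISTA with backtracking."""
    theta = theta0.copy()
    for _ in range(iters):
        z = theta - step * grad_f(theta)
        # soft-thresholding = proximal operator of the l1 penalty
        theta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return theta

def mc_lasso_path(X, y, n_lambda=100):
    """Outer structure of Algorithm 1 on a toy least-squares loss
    (the real method re-simulates Gibbs samples at each lambda to
    refresh the Monte Carlo likelihood approximation)."""
    n, d = X.shape
    loss = lambda th: 0.5 / n * np.sum((X @ th - y) ** 2)
    grad = lambda th: X.T @ (X @ th - y) / n
    # largest lambda that yields the empty model, then a log-spaced path
    lam_max = np.max(np.abs(X.T @ y)) / n
    lambdas = np.logspace(np.log10(lam_max), np.log10(lam_max * 1e-3), n_lambda)
    step = 1.0 / np.linalg.eigvalsh(X.T @ X / n)[-1]  # 1 / Lipschitz constant
    psi = np.zeros(d)  # warm start, mirroring psi = theta_hat_i in Algorithm 1
    path, bic = [], []
    for lam in lambdas:
        psi = ista(grad, psi, lam, step)
        path.append(psi.copy())
        # BIC-style criterion: n * loss + log(n) * ||theta||_0
        bic.append(n * loss(psi) + np.log(n) * np.count_nonzero(psi))
    return path[int(np.argmin(bic))]
```

The warm start (`psi` carried between λ values) is the same device the pseudocode uses when it sets ψ = θ̂ᵢ before moving to the next penalty level; it keeps the inner solver cheap along the path.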