Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Particle Gibbs with Ancestor Sampling

Authors: Fredrik Lindsten, Michael I. Jordan, Thomas B. Schön

JMLR 2014 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section we illustrate the properties of PGAS in a simulation study. First, in Section 7.1 we consider a stochastic volatility SSM and investigate the improvement in mixing offered by AS when PGAS is compared with PG. ... We analyze the Standard and Poor's (S&P) 500 data from 3/April/2006 to 31/March/2014, consisting of T = 2,011 observations. ... Finally, in Section 7.3 we use a similar reformulation and apply PGAS for identification of an epidemiological model for which the state transition kernel is not available.
Researcher Affiliation | Academia | Fredrik Lindsten (EMAIL), Department of Engineering, University of Cambridge, Cambridge, CB2 1PZ, UK, and Division of Automatic Control, Linköping University, Linköping, 581 83, Sweden; Michael I. Jordan (EMAIL), Computer Science Division and Department of Statistics, University of California, Berkeley, CA 94720, USA; Thomas B. Schön (EMAIL), Department of Information Technology, Uppsala University, Uppsala, 751 05, Sweden
Pseudocode | Yes | Algorithm 1: Sequential Monte Carlo (each step is for i = 1, ..., N) ... Algorithm 2: PGAS Markov kernel. Input: Reference trajectory x'_{1:T} ∈ X^T and parameter θ ∈ Θ. Output: Sample x*_{1:T} ~ P^N_θ(x'_{1:T}, ·) from the PGAS Markov kernel. ... Algorithm 3: PGAS Markov kernel for the joint smoothing distribution p_θ(x_{1:T} | y_{1:T}) ... Algorithm 4: PGAS for Bayesian learning of SSMs ... Algorithm 5: PGAS for frequentist learning of SSMs
Open Source Code | No | The paper contains no explicit statement or link indicating that source code for the described methodology is publicly available.
Open Datasets | No | The paper uses the S&P 500 data and data from an epidemiological model. While the S&P 500 data is generally public, the paper only notes that it was 'acquired from the Yahoo Finance web page', without a permanent link or formal citation. The epidemiological data is generated for the study, and no access information is provided.
Dataset Splits | No | The paper analyzes the S&P 500 data, consisting of T = 2,011 observations, and later a small subset of T = 102 samples. For the epidemiological model, it mentions 'eight years of data with weekly observations' and states that 'The first half of the data batch is used for estimation of the model parameters'. While a split for estimation is mentioned, the paper does not provide the specific percentages, counts, or methodology needed to reproduce the splits for either dataset precisely.
Hardware Specification | No | The paper mentions running simulations and algorithms but provides no hardware details such as CPU model, GPU model, or memory specification.
Software Dependencies | No | The paper mentions 'Matlab' in the context of computational overhead and its 'drss' function, but it specifies no version numbers for Matlab or for any other software dependency used in the experiments.
Experiment Setup | No | The paper describes simulation setups, such as running PGAS and PGBS with N = 5 particles for 10,000 iterations (discarding 1,000 as burn-in) and using specific priors for the parameters. However, it lacks a dedicated experimental-setup section and a comprehensive list of hyperparameters and configurations for all experiments. For PMMH, for example, it states that 'we tune the covariance matrix of the random walk proposal distribution according to the posterior distribution obtained from an initial trial run', without giving the final tuned parameters or details of the tuning process.
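The Pseudocode row above lists Algorithm 2, the PGAS Markov kernel, whose distinguishing feature is the ancestor-sampling (AS) step for the reference particle. As a hedged illustration only — not the authors' implementation — a minimal bootstrap-proposal sketch of that kernel for a toy linear-Gaussian SSM (hypothetical parameters `a`, `q`, `r` chosen here for the example) might look like:

```python
import numpy as np

def pgas_kernel(y, x_ref, a=0.9, q=1.0, r=1.0, N=5, rng=None):
    """One draw from a PGAS-style Markov kernel (bootstrap proposal).

    Toy SSM assumed: x_t = a*x_{t-1} + v_t, v_t ~ N(0, q);
                     y_t = x_t + e_t,     e_t ~ N(0, r).
    """
    rng = rng or np.random.default_rng()
    T = len(y)
    X = np.zeros((N, T))        # particle positions
    A = np.zeros((N, T), int)   # ancestor indices
    X[:, 0] = rng.normal(0.0, np.sqrt(q), N)
    X[N - 1, 0] = x_ref[0]      # condition on the reference trajectory
    logw = -0.5 * (y[0] - X[:, 0]) ** 2 / r
    for t in range(1, T):
        w = np.exp(logw - logw.max()); w /= w.sum()
        A[:, t] = rng.choice(N, size=N, p=w)            # resample ancestors
        X[:, t] = a * X[A[:, t], t - 1] + rng.normal(0.0, np.sqrt(q), N)
        X[N - 1, t] = x_ref[t]                          # reference particle
        # Ancestor sampling (the AS step): draw the reference particle's
        # ancestor with weights proportional to w_{t-1} * f(x_ref[t] | x_{t-1}).
        logw_as = logw - 0.5 * (x_ref[t] - a * X[:, t - 1]) ** 2 / q
        w_as = np.exp(logw_as - logw_as.max()); w_as /= w_as.sum()
        A[N - 1, t] = rng.choice(N, p=w_as)
        logw = -0.5 * (y[t] - X[:, t]) ** 2 / r         # reweight
    # Sample one trajectory index and trace it back through the ancestry.
    w = np.exp(logw - logw.max()); w /= w.sum()
    k = rng.choice(N, p=w)
    traj = np.zeros(T)
    for t in reversed(range(T)):
        traj[t] = X[k, t]
        k = A[k, t]
    return traj
```

In standard PG the reference particle's ancestor is fixed; the `logw_as` draw above is the single change that ancestor sampling introduces, and it is what the paper credits for the improved mixing at small N (e.g. N = 5).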
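The Experiment Setup row notes the reported MCMC schedule (10,000 iterations with 1,000 discarded as burn-in). A minimal sketch of that bookkeeping, with `step` standing in as a placeholder for one iteration of any PG/PGAS sampler (not the paper's code), might be:

```python
import numpy as np

def run_chain(step, theta0, n_iter=10_000, burn_in=1_000):
    """Run a Markov chain and keep only post-burn-in draws."""
    theta = theta0
    samples = []
    for _ in range(n_iter):
        theta = step(theta)   # one sweep of the (placeholder) sampler
        samples.append(theta)
    return np.asarray(samples[burn_in:])
```

Posterior summaries (means, credible intervals) are then computed from the retained `n_iter - burn_in` draws only.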