Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

De-Sequentialized Monte Carlo: a parallel-in-time particle smoother

Authors: Adrien Corenflos, Nicolas Chopin, Simo Särkkä

JMLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In Section 5, we experimentally demonstrate the statistical and computational properties of our method on a suite of examples. The article concludes with a discussion of the limitations and possible improvements of the de-Sequentialized Monte Carlo method.
Researcher Affiliation | Academia | Adrien Corenflos EMAIL Department of Electrical Engineering and Automation, Aalto University; Nicolas Chopin EMAIL ENSAE, Institut Polytechnique de Paris; Simo Särkkä EMAIL Department of Electrical Engineering and Automation, Aalto University
Pseudocode | Yes | Algorithm 1: Block combination ... Algorithm 2: Smoother initialization ... Algorithm 3: Recursion ... Algorithm 4: Conditional Block combination ... Algorithm 5: PIT linearized proposal smoother
Open Source Code | Yes | All the results were obtained using an Nvidia GeForce RTX 3090 GPU with 24GB memory and the code to reproduce them can be found at https://github.com/AdrienCorenflos/parallel-ps.
Open Datasets | No | The paper refers to models from existing literature and states that datasets were 'generated from the model' (Section 5.1) or mentions 'the same prior and data (nutria, T + 1 = 120) as in these references' (Section 5.2), but does not provide explicit links, DOIs, or repositories for public access to the specific datasets used for their experiments.
Dataset Splits | No | The paper describes generating 50 datasets for different T values (T = 32, 64, 128, 256, 512) and repeating experiments 100 times on each generated dataset in Section 5.1. In Section 5.2, it refers to using 'data (nutria, T + 1 = 120) as in these references'. However, it does not specify any training/test/validation splits for these datasets.
Hardware Specification | Yes | All the results were obtained using an Nvidia GeForce RTX 3090 GPU with 24GB memory and the code to reproduce them can be found at https://github.com/AdrienCorenflos/parallel-ps.
Software Dependencies | No | The paper mentions running experiments on a GPU and discusses parallelization, but it does not specify any software libraries, frameworks, or their version numbers (e.g., Python, PyTorch, CUDA versions).
Experiment Setup | Yes | In Section 5.1, the experiments involved generating '50 datasets x0:T, y0:T from the model for T = 32, 64, 128, 256, 512' and repeating 100 dSMC and FFBS smoothing experiments on each dataset generated, with particle counts N = 25, 50, 100, 250, 500, 1000. Section 5.2 mentions 'N = 50 particles' and '25 iterations' for EKS. Section 5.3 states 'we take σ to be in {0.3, 0.4, 0.5}'.
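For reference, the Section 5.1 configuration grid reported above can be enumerated as follows. This is a minimal sketch of the experimental design only (variable names and the dictionary layout are illustrative, not taken from the authors' code), showing how the T values, particle counts, 50 generated datasets, and 100 repetitions combine:

```python
from itertools import product

# Grid from Section 5.1 as quoted in the report: 50 generated datasets
# per sequence length T, 100 smoothing repetitions per dataset, for each
# particle count N. Names here are illustrative assumptions.
T_VALUES = [32, 64, 128, 256, 512]
N_VALUES = [25, 50, 100, 250, 500, 1000]
N_DATASETS = 50
N_REPEATS = 100

configs = [
    {"T": T, "N": N, "dataset": d, "repeat": r}
    for T, N, d, r in product(
        T_VALUES, N_VALUES, range(N_DATASETS), range(N_REPEATS)
    )
]

# 5 T values x 6 particle counts x 50 datasets x 100 repeats
print(len(configs))  # 150000
```

This enumeration only illustrates the scale of the reported sweep; the paper itself does not publish such a driver script, which is consistent with the "Software Dependencies: No" finding above.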