A connection between Tempering and Entropic Mirror Descent

Authors: Nicolas Chopin, Francesca Crucinio, Anna Korba

ICML 2024

Reproducibility assessment (Variable: Result — supporting quote from the paper)
Research Type: Experimental — "We conducted a numerical experiment to study the possible behaviors of the tempering sequences found by such adaptive strategies when using SMC samplers. Our numerical results are obtained using waste-free SMC (Dau and Chopin, 2022), as there is evidence that it improves on standard SMC, which in turn outperforms annealed importance sampling (Jasra et al., 2011)."
Researcher Affiliation: Academia — "Nicolas Chopin¹, Francesca Crucinio¹, Anna Korba¹. ¹ENSAE, CREST, Institut Polytechnique de Paris. Correspondence to: Nicolas Chopin <Nicolas.Chopin@ensae.fr>, Francesca Crucinio <francesca_romana.crucinio@kcl.ac.uk>, Anna Korba <anna.korba@ensae.fr>."
Pseudocode: Yes — "Algorithm 1: SMC samplers (Del Moral et al., 2006). Algorithm 2: Particle Mirror Descent (PMD; Dai et al., 2016). Algorithm 3: Safe and Regularized Adaptive Importance Sampling (SRAIS; Korba and Portier, 2022)."
Open Source Code: Yes — "The code is available at https://github.com/FrancescaCrucinio/MirrorDescentTempering."
Open Datasets: No — "We conducted a numerical experiment to study the possible behaviors of the tempering sequences found by such adaptive strategies when using SMC samplers. ... µ_0 = N_d(0_d, I_d), π = N_d(m, Σ), m = 1_d, and various choices for Σ"
Dataset Splits: No — "We use throughout the adaptive tempering SMC sampler as implemented in the package particles, with all its settings set to defaults; see https://github.com/nchopin/particles: the Markov kernels are random-walk Metropolis kernels, automatically calibrated on the current particle sample; the next tempering exponent is chosen so that ESS_n = N/2, and N = 10^4 and d = 25."
Hardware Specification: No — "We use throughout the adaptive tempering SMC sampler as implemented in the package particles, with all its settings set to defaults; see https://github.com/nchopin/particles: the Markov kernels are random-walk Metropolis kernels, automatically calibrated on the current particle sample; the next tempering exponent is chosen so that ESS_n = N/2, and N = 10^4 and d = 25."
Software Dependencies: No — "Our numerical results are obtained using waste-free SMC (Dau and Chopin, 2022), as there is evidence that it improves on standard SMC, which in turn outperforms annealed importance sampling (Jasra et al., 2011). We use throughout the adaptive tempering SMC sampler as implemented in the package particles, with all its settings set to defaults; see https://github.com/nchopin/particles"
Experiment Setup: Yes — "We use throughout the adaptive tempering SMC sampler as implemented in the package particles, with all its settings set to defaults; see https://github.com/nchopin/particles: the Markov kernels are random-walk Metropolis kernels, automatically calibrated on the current particle sample; the next tempering exponent is chosen so that ESS_n = N/2, and N = 10^4 and d = 25. To place all algorithms on equal footing we use the same number of particles N = 10^4 and the same Markov kernels, i.e. random-walk Metropolis kernels automatically calibrated on the current particle sample. In the case of SMC, we select the next tempering exponent so that ESS_n = N/2, or, equivalently, by setting β = 1 in (14). For the constant-rate AIS of (Goshtasbpour et al., 2023), we follow their recommendation and set δ = 1/32 (higher values of δ give slightly shorter tempering sequences but considerably worse approximations of π)."
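The ESS_n = N/2 rule quoted in the Experiment Setup row selects each new tempering exponent by searching for the step size at which the effective sample size of the incremental importance weights drops to half the particle count. The following is a minimal sketch of that adaptive rule, not the authors' implementation (the particles package ships its own); the function names and the bisection tolerance are illustrative:

```python
import numpy as np

def ess(log_weights):
    # Effective sample size of the normalized importance weights:
    # ESS = 1 / sum(W_i^2), where W_i are the normalized weights.
    w = np.exp(log_weights - log_weights.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

def next_tempering_exponent(log_ratio, beta_prev, target_frac=0.5):
    # Incremental weights for a step from beta_prev to beta are
    # proportional to exp((beta - beta_prev) * log_ratio_i), where
    # log_ratio_i = log pi(x_i) - log mu_0(x_i) for particle x_i.
    # Bisect on delta = beta - beta_prev so that ESS = target_frac * N.
    N = len(log_ratio)
    target = target_frac * N
    lo, hi = 0.0, 1.0 - beta_prev
    # If even the full step to beta = 1 keeps the ESS above the
    # target, jump straight to the final distribution.
    if ess(hi * log_ratio) >= target:
        return 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if ess(mid * log_ratio) > target:
            lo = mid  # ESS still too high: take a larger step
        else:
            hi = mid  # ESS too low: shrink the step
    return beta_prev + 0.5 * (lo + hi)
```

Iterating this rule from β = 0 until it returns 1.0 yields the adaptive tempering sequence whose behavior the experiment studies.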
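The Gaussian setup quoted in the Open Datasets row (µ_0 = N_d(0_d, I_d), π = N_d(1_d, Σ)) is tempered along the geometric path log π_β ∝ (1 − β) log µ_0 + β log π. A minimal sketch of evaluating that path; Σ = 2 I_d is an illustrative placeholder, since the report only says "various choices for Σ":

```python
import numpy as np

def gauss_logpdf(x, mean, cov):
    # Log-density of N_d(mean, cov) at a point x.
    d = len(mean)
    diff = x - mean
    _, logdet = np.linalg.slogdet(cov)
    quad = diff @ np.linalg.solve(cov, diff)
    return -0.5 * (d * np.log(2.0 * np.pi) + logdet + quad)

d = 25  # dimension used in the experiment
mu0_logpdf = lambda x: gauss_logpdf(x, np.zeros(d), np.eye(d))
# Illustrative covariance only; the paper tries various Σ.
target_logpdf = lambda x: gauss_logpdf(x, np.ones(d), 2.0 * np.eye(d))

def tempered_logpdf(x, beta):
    # Geometric tempering path (unnormalized):
    # log pi_beta = (1 - beta) * log mu_0 + beta * log pi.
    return (1.0 - beta) * mu0_logpdf(x) + beta * target_logpdf(x)
```

At β = 0 this recovers the initial distribution and at β = 1 the target, with the tempering sequence β_0 = 0 < β_1 < … < β_T = 1 bridging the two.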