Asymptotic optimality of adaptive importance sampling

Authors: François Portier, Bernard Delyon

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our numerical experiments show that (i) wAIS significantly accelerates the convergence of AIS and (ii) small allocation policies (n_t) (implying more frequent updates) give better results than large (n_t).
Researcher Affiliation | Academia | Bernard Delyon, IRMAR, University of Rennes 1, bernard.delyon@univ-rennes1.fr; François Portier, Télécom ParisTech, University of Paris-Saclay, francois.portier@gmail.com
Pseudocode | Yes | Algorithm 1 (AIS). Inputs: the number of stages T ∈ N, the allocation policy (n_t)_{t=1,...,T} ⊂ N, the sampler update procedure, the initial density q_0. Set S_0 = 0, N_0 = 0. For t = 1, ..., T: (i) (Explore) Generate (x_{t,1}, ..., x_{t,n_t}) from q_{t-1}. (ii) (Exploit) (a) Update the estimate: S_t = S_{t-1} + Σ_{i=1}^{n_t} ϕ(x_{t,i}) / q_{t-1}(x_{t,i}), N_t = N_{t-1} + n_t, I_t = N_t^{-1} S_t. (b) Update the sampler q_t.
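To make the pseudocode above concrete, here is a minimal Python sketch of Algorithm 1; the names (phi, q0, update_sampler, n_schedule) and the sampler interface are illustrative assumptions, not the interface of the authors' released code.

```python
import numpy as np

def ais(phi, q0, update_sampler, n_schedule, rng=None):
    """Sketch of Algorithm 1 (AIS) for a scalar-valued integrand phi.

    phi            : integrand, maps an (n, d) array of points to an (n,) array
    q0             : initial sampler exposing .sample(n, rng) and .pdf(x)
    update_sampler : rule building the next sampler from all past points and weights
    n_schedule     : allocation policy (n_1, ..., n_T)
    """
    rng = rng or np.random.default_rng()
    q, S, N = q0, 0.0, 0
    xs, ws = [], []
    for n_t in n_schedule:
        x = q.sample(n_t, rng)                 # (i) Explore: draw n_t points from q_{t-1}
        w = phi(x) / q.pdf(x)                  # importance-weighted integrand values
        S += w.sum()                           # (ii.a) S_t = S_{t-1} + sum_i phi(x_i) / q_{t-1}(x_i)
        N += n_t                               #        N_t = N_{t-1} + n_t
        I_t = S / N                            #        I_t = S_t / N_t
        xs.append(x); ws.append(w)
        q = update_sampler(np.concatenate(xs), np.concatenate(ws))  # (ii.b) update the sampler q_t
    return I_t
```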
Open Source Code | Yes | The code is made available at https://github.com/portierf/AIS.
Open Datasets | No | The paper uses a 'toy Gaussian example' as a synthetic setting for its numerical experiments, not a publicly available dataset with a formal citation or access link. 'The aim is to compute µ = ∫ x φ_{µ,σ}(x) dx, where φ_{µ,σ}: R^d → R is the probability density of N(µ, σ²I_d), µ = (5, ..., 5)^T ∈ R^d, σ = 1.'
Dataset Splits | No | The paper does not mention train/validation/test splits. It reports MSE over '100 replicates' of the numerical experiments, but there is no data splitting for model training and evaluation.
Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU models, CPU types, memory) used to run the experiments.
Software Dependencies | No | The paper states that 'The code is made available at https://github.com/portierf/AIS', but it does not specify any software dependencies or their versions.
Experiment Setup | Yes | We set N_T = 1e5 and we consider d = 2, 4, 8, 16. The sampling policy is taken in the collection of multivariate Student distributions of degree ν = 3, denoted {q_{µ,Σ0} : µ ∈ R^d}, with Σ0 = σ0 I_d (ν − 2)/ν and σ0 = 5. The initial sampling policy is set as µ_0 = (0, ..., 0) ∈ R^d. The mean µ_t is updated at each stage t = 1, ..., T following the GMM approach as described in Section 3. We compare the evolution of all the mentioned algorithms with respect to the stages t = 1, ..., T = 50 with a constant allocation policy n_t = 2e3 (for AIS and wAIS).
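As a rough, self-contained illustration of this setup (not the authors' released code), the sketch below runs plain AIS on the toy Gaussian target for one representative dimension d = 4, with a Student-t sampling policy of degree ν = 3 whose mean is re-estimated at each stage by a simple normalized moment match standing in for the paper's GMM update; the ground truth of the integral is the target mean (5, ..., 5).

```python
import numpy as np
from scipy.stats import multivariate_normal, multivariate_t

d, T, n_t = 4, 50, 2000                         # one of the tested dimensions; N_T = T * n_t = 1e5
nu, sigma0 = 3, 5.0
mu_star = np.full(d, 5.0)                       # target mean; also the true value of the integral
target = multivariate_normal(mean=mu_star, cov=np.eye(d))   # sigma = 1
Sigma0 = sigma0 * np.eye(d) * (nu - 2) / nu     # fixed scale matrix of the Student sampler

rng = np.random.default_rng(0)
mu_t = np.zeros(d)                              # initial sampling policy mu_0 = (0, ..., 0)
S, N = np.zeros(d), 0
for t in range(T):
    q = multivariate_t(loc=mu_t, shape=Sigma0, df=nu)
    x = q.rvs(size=n_t, random_state=rng)       # explore: draw from q_{t-1}
    w = target.pdf(x) / q.pdf(x)                # importance weights
    S += (w[:, None] * x).sum(axis=0)           # integrand is x times the target density
    N += n_t
    I_t = S / N                                 # running AIS estimate of mu
    mu_t = (w[:, None] * x).sum(axis=0) / w.sum()   # normalized moment match (stand-in for the GMM update)

print("estimate:", I_t, "  error:", np.linalg.norm(I_t - mu_star))
```

Repeating such a run over 100 independent replicates and averaging the squared errors would reproduce the kind of MSE curves reported in the experiments.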