A Quadrature Rule combining Control Variates and Adaptive Importance Sampling
Authors: Rémi Leluc, François Portier, Johan Segers, Aigerim Zhuman
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The good behavior of the method is illustrated empirically on synthetic examples and real-world data for Bayesian linear regression. To compare the finite-sample performance of the AIS and AISCV estimators, we first present in Section 6.1 synthetic data examples involving the integration problem over the unit cube [0, 1]^d and then with respect to some Gaussian mixtures as in [4]. The goal is to compute ∫ gf dλ for vectors of integrands g : R^d → R^p. We consider various dimensions d > 1 and several choices for the number of control variates m. Section 6.2 deals with real-world datasets in the context of Bayesian inference. |
| Researcher Affiliation | Academia | Rémi Leluc (LTCI, Télécom Paris, Institut Polytechnique de Paris, France; remi.leluc@telecom-paris.fr); François Portier (CREST, ENSAI, France; francois.portier@gmail.com); Aigerim Zhuman (LIDAM, ISBA, UCLouvain, Belgium; aigerim.zhuman@uclouvain.be); Johan Segers (LIDAM, ISBA, UCLouvain, Belgium; johan.segers@uclouvain.be) |
| Pseudocode | Yes | Algorithm 1 Adaptive Importance Sampling with Control Variates (AISCV) and Algorithm 2 Quadrature Rule AISCV post-hoc scheme |
| Open Source Code | Yes | For ease of reproducibility, the code, numerical details and additional results are available in the supplementary material. |
| Open Datasets | Yes | Classical datasets from [11] are considered: housing (N = 506; d = 13; m ∈ {12; 104}); abalone (N = 4177; d = 8; m ∈ {7; 44}); red wine (N = 1599; d = 11; m ∈ {10; 77}); and white wine (N = 4898; d = 11; m ∈ {10; 77}). |
| Dataset Splits | No | The paper does not provide specific dataset split information (e.g., train/validation/test percentages or counts) as commonly understood for model training and evaluation. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running its experiments. |
| Software Dependencies | No | The paper mentions software such as TensorFlow, PyTorch, and PyMC but does not specify their version numbers or any other software dependencies with version details. |
| Experiment Setup | Yes | In all simulations, the sampling policy is taken within the family of multivariate Student t distributions of degree ν, denoted by {q_(µ,Σ0) : µ ∈ R^d}, with Σ0 = σ0 I_d (ν − 2)/ν and ν > 2, σ0 > 0. ... The allocation policy is fixed to n_t = 1000 and the number of stages is T ∈ {5; 10; 20; 30; 50}. The policy parameters are µ0 = (0.5, ..., 0.5) ∈ R^d, ν = 8, and σ0 = 0.1. (A hedged code sketch of this setup follows the table.) |
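
To make the Experiment Setup row above concrete, the sketch below builds the Student t sampling policy it describes (µ0 = (0.5, ..., 0.5), ν = 8, σ0 = 0.1, Σ0 = σ0 I_d (ν − 2)/ν, n_t = 1000 samples per stage, T stages) and runs a plain self-normalized importance-sampling estimate of ∫ gf dλ. This is an assumption-laden placeholder, not the authors' released AISCV code: the integrand `g`, the target density `f`, and the mean-update rule are hypothetical stand-ins, and the control-variate correction that defines AISCV is omitted.

```python
# Hypothetical sketch of the quoted experiment setup (not the authors' code).
import numpy as np
from scipy.stats import multivariate_t

d = 4                      # dimension (the paper varies d)
nu = 8                     # degrees of freedom of the Student t policy
sigma0 = 0.1
mu0 = np.full(d, 0.5)      # initial policy mean mu_0 = (0.5, ..., 0.5)
# Scale matrix chosen so the Student t covariance equals sigma0 * I_d:
Sigma0 = sigma0 * np.eye(d) * (nu - 2) / nu

n_t = 1000                 # samples per stage (allocation policy)
T = 10                     # number of stages, T in {5, 10, 20, 30, 50}

def f(x):
    """Placeholder target density on [0, 1]^d (here: uniform)."""
    inside = np.all((x >= 0.0) & (x <= 1.0), axis=-1)
    return inside.astype(float)

def g(x):
    """Placeholder integrand g : R^d -> R^p (here: first coordinate, p = 1)."""
    return x[..., :1]

mu = mu0.copy()
num, den = 0.0, 0.0
for t in range(T):
    q = multivariate_t(loc=mu, shape=Sigma0, df=nu)   # current sampling policy
    x = q.rvs(size=n_t)                               # n_t draws of dimension d
    w = f(x) / q.pdf(x)                               # importance weights f/q
    num += (w[:, None] * g(x)).sum(axis=0)
    den += w.sum()
    # Generic weighted-mean update of the policy mean; the actual AISCV
    # adaptation and control-variate step are in the authors' supplementary code.
    mu = (w[:, None] * x).sum(axis=0) / max(w.sum(), 1e-12)

estimate = num / den       # self-normalized IS estimate of the integral of g*f
print(estimate)
```

Running the sketch prints a crude estimate of the placeholder integral; it is meant only to show how the quoted policy parameters (ν, σ0, µ0, n_t, T) fit together, not to reproduce the paper's results.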