Vector-Valued Control Variates
Authors: Zhuo Sun, Alessandro Barp, François-Xavier Briol
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate our methodology on a range of problems including multifidelity modelling, Bayesian inference for dynamical systems, and model evidence computation through thermodynamic integration. |
| Researcher Affiliation | Academia | 1University College London, London, UK 2University of Cambridge, Cambridge, UK 3The Alan Turing Institute, London, UK. |
| Pseudocode | Yes | Algorithm 1 Block-coordinate descent for vv-CVs with unknown task relationship |
| Open Source Code | Yes | The code to reproduce our results is available at: https://github.com/jz-fun/Vector-valued-Control-Variates-Code |
| Open Datasets | Yes | We use the dataset of snowshoe hares (prey) and Canadian lynxes (predators) from Hewitt (1921), and implement Bayesian inference on the model parameters x using the No-U-Turn Sampler (NUTS) in Stan (Carpenter et al., 2017). |
| Dataset Splits | No | The paper focuses on Monte Carlo methods and discusses sample sizes for integration tasks (e.g., 'm = (m L, m H) = (40, 40)'), but it does not specify explicit train/validation/test dataset splits with percentages or absolute counts for the input data itself. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models (e.g., NVIDIA A100), CPU models (e.g., Intel Xeon), or cloud compute specifications used for running experiments. |
| Software Dependencies | No | The paper mentions using 'Stan (Carpenter et al., 2017)' and the 'Adam optimiser (Kingma & Ba, 2015)' but does not provide specific version numbers for these or any other software dependencies crucial for replication. |
| Experiment Setup | Yes | Hyper-parameter tuning: batch size 5; learning rate 0.05; 30 epochs. Base kernel: squared exponential. Optimisation: λ = 0.001; batch size 5; learning rate 0.001; 400 epochs. |
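For context on the paper's topic, a minimal sketch of the underlying idea may help: a control variate reduces the variance of a Monte Carlo estimate by subtracting a function with known expectation. The sketch below is a standard scalar control variate on a toy integrand, not the paper's vector-valued (vv-CV) construction; the integrand `f(x) = x**2 + x` and the control `g(x) = x` are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Goal: estimate E[f(X)] for X ~ N(0, 1) with f(x) = x^2 + x.
# The true value is 1, since E[X^2] = 1 and E[X] = 0.
n = 100_000
x = rng.standard_normal(n)
f = x**2 + x

# Plain Monte Carlo estimate.
mc = f.mean()

# Control variate g(x) = x, whose mean E[g] = 0 is known exactly.
g = x
# Near-optimal coefficient beta = Cov(f, g) / Var(g), estimated from the sample.
beta = np.cov(f, g)[0, 1] / g.var()

# Control-variate estimate: same expectation as mc, lower variance.
cv = (f - beta * (g - 0.0)).mean()

print(mc, cv)  # both close to 1
```

Here the adjusted samples `f - beta * g` have strictly smaller variance than `f` itself, which is the effect the paper extends to the vector-valued setting where several related integrals are estimated jointly.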