Simulation-Based Inference with Quantile Regression

Authors: He Jia

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate that NQE achieves state-of-the-art performance on a variety of benchmark problems." "In Section 3, we demonstrate that NQE attains state-of-the-art performance across a variety of benchmark problems, together with a realistic application to high dimensional cosmology data." (Section 3: Numerical Experiments)
Researcher Affiliation | Academia | He Jia (贾赫), Department of Astrophysical Sciences, Princeton University, USA. Correspondence to: He Jia <hejia@princeton.edu>.
Pseudocode | Yes | Algorithm 1: Neural Quantile Estimation (NQE) (see the quantile-loss sketch after the table)
Open Source Code | Yes | The results in this paper can be reproduced with the publicly available NQE package (https://github.com/h3jia/nqe) based on pytorch (Paszke et al., 2019).
Open Datasets | Yes | We assess the performance of NQE on six benchmark problems, with detailed specifications provided in Appendix C. All results for methods other than NQE are adopted from Lueckmann et al. (2021). ... We use the following problems from Lueckmann et al. (2021) to benchmark the performance of the SBI methods. The ground truth posterior samples are available for all the problems.
Dataset Splits | Yes | We use 80% simulations for training, 10% for validation, and 10% for test. (see the benchmark-loading and split sketch after the table)
Hardware Specification | Yes | We train all the models on NVIDIA A100 MIG GPUs using the AdamW optimizer (Loshchilov & Hutter, 2017), and find the wall time of NQE training to be comparable to existing methods like NPE. The work presented in this article was performed on computational resources managed and supported by Princeton Research Computing, a consortium of groups including the Princeton Institute for Computational Science and Engineering (PICSciE) and the Office of Information Technology's High Performance Computing Center and Visualization Laboratory at Princeton University.
Software Dependencies | No | The paper mentions "pytorch (Paszke et al., 2019)" and "Cython (Behnel et al., 2010)" but does not specify their exact version numbers. It also mentions the "AdamW optimizer", but this is an algorithm, not a software library with a version number.
Experiment Setup | Yes | Table 2: Our baseline choice of NQE hyperparameters [lists values for ptl, p0, f0, f1, f2, λreg, # of MLP hidden layers, # of MLP hidden neurons per layer, nbin]. We train all the models on NVIDIA A100 MIG GPUs using the AdamW optimizer (Loshchilov & Hutter, 2017), and find the wall time of NQE training to be comparable to existing methods like NPE. ... We reduce the stepsize by 10% after every 5 epochs, and terminate the training if the loss does not improve after 30 epochs or when the training reaches 300 epochs. (see the training-schedule sketch after the table)
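
The Algorithm 1 referenced in the Pseudocode row is not reproduced in this summary. For orientation, the following is a minimal PyTorch sketch of the standard quantile-regression (pinball) loss that neural quantile estimation builds on; the function name, tensor shapes, and quantile grid are illustrative assumptions, not the paper's implementation.

```python
import torch

def pinball_loss(pred_quantiles, theta, taus):
    """Quantile (pinball) loss averaged over batch and quantile levels.

    pred_quantiles: (batch, n_tau) predicted conditional quantiles
    theta:          (batch,)       true parameter values from the simulations
    taus:           (n_tau,)       quantile levels in (0, 1)
    """
    diff = theta.unsqueeze(-1) - pred_quantiles              # (batch, n_tau)
    return torch.maximum(taus * diff, (taus - 1.0) * diff).mean()

# Example: 16 quantile levels for a single scalar parameter.
taus = torch.linspace(0.05, 0.95, 16)
pred = torch.randn(32, 16).sort(dim=-1).values               # monotone dummy quantiles
theta = torch.randn(32)
loss = pinball_loss(pred, theta, taus)
```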
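
For the Open Datasets and Dataset Splits rows, the benchmark problems come from Lueckmann et al. (2021), whose tasks are distributed through the sbibm package. Whether the authors load them via sbibm is an assumption made for this illustration, as are the task name and the splitting code.

```python
import torch
import sbibm  # benchmark tasks from Lueckmann et al. (2021); assumed tooling

task = sbibm.get_task("two_moons")            # one of the standard benchmark problems
prior = task.get_prior()
simulator = task.get_simulator()

theta = prior(num_samples=10_000)             # draw parameters from the prior
x = simulator(theta)                          # run the simulator on those parameters

# Illustrative 80% / 10% / 10% train / validation / test split.
n = theta.shape[0]
perm = torch.randperm(n)
n_train, n_val = int(0.8 * n), int(0.1 * n)
train_idx = perm[:n_train]
val_idx = perm[n_train:n_train + n_val]
test_idx = perm[n_train + n_val:]

# Reference posterior samples are available for every benchmark problem.
reference = task.get_reference_posterior_samples(num_observation=1)
```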
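
The Experiment Setup row quotes the learning-rate schedule and stopping rule. The sketch below mirrors that recipe with a toy model and data, assuming PyTorch's AdamW and StepLR; everything except the 10%-per-5-epochs decay, the 30-epoch patience, and the 300-epoch cap is a placeholder rather than the paper's configuration.

```python
import torch
from torch import nn
from torch.optim import AdamW
from torch.optim.lr_scheduler import StepLR

# Toy stand-in for the NQE network and data; only the schedule follows the quoted recipe.
torch.manual_seed(0)
x = torch.randn(1000, 4)
y = x.sum(dim=1, keepdim=True) + 0.1 * torch.randn(1000, 1)
x_train, y_train, x_val, y_val = x[:800], y[:800], x[800:], y[800:]

model = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = AdamW(model.parameters(), lr=1e-3)
scheduler = StepLR(optimizer, step_size=5, gamma=0.9)     # cut the stepsize by 10% every 5 epochs

best_val, bad_epochs = float("inf"), 0
for epoch in range(300):                                  # hard cap of 300 epochs
    model.train()
    optimizer.zero_grad()
    nn.functional.mse_loss(model(x_train), y_train).backward()
    optimizer.step()
    scheduler.step()

    model.eval()
    with torch.no_grad():
        val_loss = nn.functional.mse_loss(model(x_val), y_val).item()
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= 30:                              # stop after 30 epochs without improvement
            break
```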