Active Sequential Posterior Estimation for Sample-Efficient Simulation-Based Inference
Authors: Sam Griesemer, Defu Cao, Zijun Cui, Carolina Osorio, Yan Liu
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We further demonstrate the effectiveness of the proposed method in the travel demand calibration setting, a high-dimensional inverse problem commonly requiring computationally expensive traffic simulators. Our method outperforms well-tuned benchmarks and state-of-the-art posterior estimation methods on a large-scale real-world traffic network, as well as demonstrates a performance advantage over non-active counterparts on a suite of SBI benchmark environments. |
| Researcher Affiliation | Collaboration | Sam Griesemer (USC), Defu Cao (USC), Zijun Cui (USC, MSU), Carolina Osorio (Google Research, HEC Montréal), Yan Liu (USC) |
| Pseudocode | Yes | Algorithm 1 Active Sequential Neural Posterior Estimation (ASNPE) |
| Open Source Code | Yes | Available at https://github.com/samgriesemer/seqinf. |
| Open Datasets | Yes | We conducted a case study on the large-scale regional Munich network seen in [37]. |
| Dataset Splits | No | The paper does not explicitly define training, validation, and test splits (e.g., 70/15/15%) for a fixed dataset in the conventional machine learning sense, instead focusing on sequential data collection and model updates. |
| Hardware Specification | Yes | To run experiments, we employed our own hardware locally: a Linux-based machine with an Intel(R) Core(TM) i9-10900X CPU @ 3.70GHz, 64GB of memory, and an NVIDIA GeForce RTX 2080 Ti. |
| Software Dependencies | No | The paper mentions 'Python 3.11' and the use of the 'sbi' [44] and 'sbibm' [28] Python packages, as well as 'SUMO' [26]. However, it does not provide specific version numbers for all of these key software components (e.g., sbi, sbibm, or underlying deep learning frameworks like PyTorch/TensorFlow if used by the NDE), which are crucial for full reproducibility. |
| Experiment Setup | Yes | In the SNPE loop: total number of rounds R (4 in reported experiments), round-wise sample size N (between 256 and 512), round-wise selection size B (32 in reported experiments). Neural Density Estimator (NDE) model: our model architecture (used for both SNPE and ASNPE) is a masked autoregressive flow with 5 transform layers, each with masked feedforward blocks containing 50 hidden units, and trained with a (consistent) MC-dropout setting of 0.25. When collecting distributional estimates as described in Eq. 4, we used 100 weight samples ϕ ∼ p(ϕ\|D) (as generally recommended in [21]). Hedged sketches of this configuration follow the table. |
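
The Experiment Setup row maps closely onto the `sbi` package that the paper reports using, so a minimal sequential-NPE sketch with the reported hyperparameters (R = 4 rounds, N simulations per round, a 5-transform MAF with 50 hidden units) is given below. The toy `simulator`, prior bounds, and observation `x_o` are placeholders, import paths vary across `sbi` versions, and forwarding `dropout_probability` through `posterior_nn` to realize the 0.25 MC-dropout rate is an assumption rather than a detail confirmed by the paper.

```python
import torch

from sbi.inference import SNPE
from sbi.utils import BoxUniform, posterior_nn  # import paths vary by sbi version

# Placeholder problem definition (not from the paper): a toy simulator,
# a box prior, and an observation standing in for the travel-demand setting.
def simulator(theta: torch.Tensor) -> torch.Tensor:
    return theta + 0.1 * torch.randn_like(theta)

prior = BoxUniform(low=-2 * torch.ones(3), high=2 * torch.ones(3))
x_o = torch.zeros(1, 3)

# NDE matching the reported architecture: a masked autoregressive flow with
# 5 transforms and 50 hidden units. Passing dropout_probability here to get
# the 0.25 MC-dropout rate is an assumption about the flow builder's kwargs.
build_nde = posterior_nn(
    model="maf",
    num_transforms=5,
    hidden_features=50,
    dropout_probability=0.25,
)

inference = SNPE(prior=prior, density_estimator=build_nde)

num_rounds = 4   # R in the reported experiments
num_sims = 256   # round-wise sample size N (256-512 reported)

proposal = prior
for _ in range(num_rounds):
    theta = proposal.sample((num_sims,))  # draw candidates from current proposal
    x = simulator(theta)                  # run the (expensive) simulator
    nde = inference.append_simulations(theta, x, proposal=proposal).train()
    posterior = inference.build_posterior(nde)
    proposal = posterior.set_default_x(x_o)  # condition the next round on x_o
```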
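
The active-selection step that distinguishes ASNPE from plain SNPE is only summarized in the table: score a candidate pool using distributional estimates from 100 MC-dropout weight samples ϕ ∼ p(ϕ|D), then simulate only the top B = 32. The sketch below illustrates that select-B-from-N pattern only; the variance-based `disagreement` criterion is a hypothetical stand-in for the paper's actual acquisition (Eq. 4), and `log_prob_fn` is assumed to evaluate the NDE with its dropout layers left stochastic.

```python
import torch

def mc_dropout_scores(log_prob_fn, theta: torch.Tensor,
                      num_weight_samples: int = 100) -> torch.Tensor:
    """Stack repeated stochastic evaluations of log q(theta | x_o).

    log_prob_fn is assumed to run the NDE with dropout kept active, so each
    call approximates one weight sample phi ~ p(phi | D). Returns a tensor
    of shape (num_weight_samples, num_candidates).
    """
    with torch.no_grad():
        return torch.stack([log_prob_fn(theta) for _ in range(num_weight_samples)])

def select_top_b(log_prob_fn, proposal,
                 num_candidates: int = 256, b: int = 32) -> torch.Tensor:
    """Draw an N-candidate pool from the proposal and keep the B candidates
    whose MC-dropout estimates disagree the most (an illustrative placeholder
    for the paper's acquisition score, not Eq. 4 itself)."""
    theta = proposal.sample((num_candidates,))
    disagreement = mc_dropout_scores(log_prob_fn, theta).var(dim=0)
    return theta[disagreement.topk(b).indices]
```

In a full ASNPE round, only the selected batch of B parameters would be handed to the expensive traffic simulator, which is where the sample-efficiency advantage over the plain loop above comes from.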