Predictive Approximate Bayesian Computation via Saddle Points

Authors: Yingxiang Yang, Bo Dai, Negar Kiyavash, Niao He

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental (Numerical Experiment) | We test the performance of P-ABC and compare the results with K2- and DR-ABC as representatives of sampling- and regression-based ABC algorithms. Table 1: MSE for estimating the model parameter with different dimensions using K2-, DR-, and P-ABC. Figure 3: Statistics of MSEs for P-, K2-, and DR-ABC trained on 1000 sequences of length 30.
Researcher Affiliation | Collaboration | Yingxiang Yang, Negar Kiyavash, and Niao He ({yyang172,kiyavash,niaohe}@illinois.edu): Department of Electrical and Computer Engineering and Department of Industrial and Enterprise Systems Engineering, University of Illinois at Urbana-Champaign. Bo Dai (bohr.dai@gmail.com): Google Brain.
Pseudocode | Yes | Algorithm 1: Predictive ABC (P-ABC)
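
Since the paper provides pseudocode but no public code, the following is a minimal, hypothetical sketch of the saddle-point training pattern the paper's title refers to: simultaneous stochastic gradient descent on one network and ascent on another. The coupling objective saddle_obj, the network shapes, and all hyperparameters here are illustrative assumptions, not the paper's actual P-ABC objective.

    # A hypothetical sketch of gradient descent-ascent on a saddle-point
    # objective, the optimization pattern underlying Algorithm 1 (P-ABC).
    # saddle_obj, the network sizes, and the batch sampling are assumptions.
    import torch
    import torch.nn as nn

    f_net = nn.Sequential(nn.Linear(1, 8), nn.ELU(), nn.Linear(8, 1), nn.Tanh())  # "min" player
    g_net = nn.Sequential(nn.Linear(1, 8), nn.ELU(), nn.Linear(8, 1))             # "max" player
    opt_f = torch.optim.SGD(f_net.parameters(), lr=1e-4)
    opt_g = torch.optim.SGD(g_net.parameters(), lr=1e-4)

    def saddle_obj(xi):
        # Placeholder bilinear coupling; the real P-ABC objective couples
        # the predictive model with a dual test function instead.
        return (f_net(xi) * g_net(xi)).mean()

    for step in range(200000):
        xi = 2 * torch.rand(64, 1) - 1        # xi ~ p0 = Uniform[-1, 1]

        loss = saddle_obj(xi)                 # descent step for the min player
        opt_f.zero_grad()
        loss.backward()
        opt_f.step()

        neg = -saddle_obj(xi)                 # ascent step for the max player
        opt_g.zero_grad()
        neg.backward()
        opt_g.step()

Alternating the two steps, rather than updating both players simultaneously, is itself a design choice; the paper's exact update order is not stated in this report.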
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | No | The paper describes generating synthetic datasets rather than using publicly available ones with explicit access information. For the ecological dynamic system, it cites a previous study (e.g., Park et al. [2016]) but does not provide specific access details (link, DOI, or repository) for the dataset itself.
Dataset Splits | No | The paper mentions training and test sets and their respective MSEs but does not provide explicit dataset split percentages, validation-set counts, or citations to predefined splits.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, processor types, or memory used for running the experiments.
Software Dependencies | No | The paper mentions the use of neural networks and LSTM cells but does not specify any software libraries or their version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | Each neural network contains two fully connected layers of size 8 with exponential linear unit (ELU) activation functions, and the final output layer for f is activated using the hyperbolic tangent. We choose ξ ∈ ℝ and p0(ξ) ∝ 1{ξ ∈ [−1, 1]}, and use a learning rate of 10⁻⁴. In 2E5 iterations, P-ABC achieves 0.0413 mean square error (MSE) on the training set and 0.0416 MSE on the test set. For P-ABC, we set ξ ∈ ℝ⁴, the size of the LSTM cells to 32, and the size of the fully connected layer to 16.
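
The architectures quoted above can be written down directly; the sketch below is one plausible PyTorch rendering. The input and output dimensions (in_dim, out_dim, seq_dim), the optimizer, and the head after the size-16 layer are assumptions not stated in the excerpt.

    # A hedged reconstruction of the two architectures described in the
    # setup row; anything marked "assumed" is not specified in the excerpt.
    import torch
    import torch.nn as nn

    in_dim, out_dim, seq_dim = 1, 1, 1  # assumed

    # Two fully connected layers of size 8 with ELU activations; the final
    # output layer for f is activated with the hyperbolic tangent.
    f = nn.Sequential(
        nn.Linear(in_dim, 8), nn.ELU(),
        nn.Linear(8, 8), nn.ELU(),
        nn.Linear(8, out_dim), nn.Tanh(),
    )

    # Sequence model: LSTM cells of size 32 followed by a fully connected
    # layer of size 16; the intermediate ELU and the final head mapping
    # 16 -> out_dim are assumed.
    class SeqNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.lstm = nn.LSTM(input_size=seq_dim, hidden_size=32, batch_first=True)
            self.fc = nn.Sequential(nn.Linear(32, 16), nn.ELU(), nn.Linear(16, out_dim))

        def forward(self, x):              # x: (batch, seq_len, seq_dim)
            out, _ = self.lstm(x)          # out: (batch, seq_len, 32)
            return self.fc(out[:, -1])     # use the last hidden state

    opt = torch.optim.Adam(f.parameters(), lr=1e-4)  # lr from the paper; Adam is assumed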