PI3NN: Out-of-distribution-aware Prediction Intervals from Three Neural Networks

Authors: Siyan Liu, Pei Zhang, Dan Lu, Guannan Zhang

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Benchmark and real-world experiments show that our method outperforms several state-of-the-art approaches with respect to predictive uncertainty quality, robustness, and OOD sample identification."
Researcher Affiliation | Academia | "Siyan Liu, Pei Zhang & Dan Lu, Computational Sciences and Engineering Division, Oak Ridge National Laboratory, 1 Bethel Valley Road, Oak Ridge, TN 37830, USA ({lius1, zhangp1, lud1}@ornl.gov); Guannan Zhang, Computer Science and Mathematics Division, Oak Ridge National Laboratory, 1 Bethel Valley Road, Oak Ridge, TN 37830, USA ({zhangg}@ornl.gov)"
Pseudocode | No | The paper describes the main algorithm in Section 3.2 through numbered steps in prose, but does not present it as structured pseudocode or a clearly labeled algorithm block.
Open Source Code | Yes | "The code of PI3NN is available at https://github.com/liusiyan/PI3NN."
Open Datasets | Yes | "We first evaluate PI3NN to calculate 95% PIs on nine UCI datasets [26]. ... The flight delay data (www.kaggle.com/vikalpdongre/us-flights-data-2008) contains the flight information in the U.S. in the year 2008."
Dataset Splits | Yes | "Additional 20% validation data is split from the training data. ... We pre-split each data set with a fixed splitting random seed, and generated 3 train/test (90%/10%) data pairs used for all methods, so as to ensure a fair comparison."
Hardware Specification | No | No specific hardware details (such as GPU models, CPU types, or memory specifications) used for running the experiments are mentioned in the paper.
Software Dependencies | No | The paper mentions software components like the Adam optimizer and a ReLU NN, but does not provide specific version numbers for any libraries, frameworks, or programming languages (e.g., "Python 3.x", "PyTorch 1.x").
Experiment Setup | Yes | "For the PI3NN method, we use the same hyper-parameters for all experiments: learning rate (0.01), Adam optimizer with MSE loss for all three networks. ... Maximum epochs is 3,000."
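
The splitting protocol quoted in the Dataset Splits row (three fixed-seed 90%/10% train/test pairs, with a further 20% of each training set held out for validation) might be reproduced along the lines of the following sketch. The function name, argument names, and exact index bookkeeping are illustrative assumptions, not taken from the PI3NN repository.

```python
import numpy as np

def make_splits(X, y, n_repeats=3, test_frac=0.10, val_frac=0.20, seed=0):
    """Sketch of the paper's splitting protocol: n_repeats fixed-seed
    90%/10% train/test pairs, then 20% of each training set held out
    for validation. (Names and layout are illustrative assumptions.)"""
    rng = np.random.default_rng(seed)  # fixed splitting random seed
    n = len(X)
    splits = []
    for _ in range(n_repeats):
        perm = rng.permutation(n)
        n_test = int(round(test_frac * n))
        test_idx, train_idx = perm[:n_test], perm[n_test:]
        # Hold out a validation slice from the training portion only.
        n_val = int(round(val_frac * len(train_idx)))
        val_idx, tr_idx = train_idx[:n_val], train_idx[n_val:]
        splits.append((tr_idx, val_idx, test_idx))
    return splits
```

Pre-generating the index triples once and reusing them for every method is what makes the comparison fair: each baseline sees exactly the same train/validation/test rows.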
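
The quoted setup (Adam optimizer, learning rate 0.01, MSE loss, up to 3,000 epochs, applied identically to each of PI3NN's three networks) can be sketched for a single network as below. The one-hidden-layer ReLU architecture, the layer width, and the weight initialization are assumptions for illustration; only the optimizer, learning rate, loss, and epoch budget come from the paper.

```python
import numpy as np

def train_relu_net(X, y, hidden=16, lr=0.01, epochs=3000, seed=0):
    """Train a one-hidden-layer ReLU network with Adam on MSE loss,
    matching the reported settings (lr=0.01, Adam, MSE, <=3000 epochs).
    Architecture and initialization are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    params = [W1, b1, W2, b2]
    m = [np.zeros_like(p) for p in params]  # Adam first-moment estimates
    v = [np.zeros_like(p) for p in params]  # Adam second-moment estimates
    beta1, beta2, eps = 0.9, 0.999, 1e-8    # standard Adam defaults
    for t in range(1, epochs + 1):
        # Forward pass: ReLU hidden layer, linear output.
        h = np.maximum(X @ W1 + b1, 0.0)
        pred = h @ W2 + b2
        err = pred - y.reshape(-1, 1)
        # Backprop of the MSE loss through both layers.
        g_pred = 2.0 * err / len(X)
        gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
        g_h = (g_pred @ W2.T) * (h > 0)
        gW1 = X.T @ g_h; gb1 = g_h.sum(0)
        # Adam update with bias correction, applied in place.
        for i, g in enumerate([gW1, gb1, gW2, gb2]):
            m[i] = beta1 * m[i] + (1 - beta1) * g
            v[i] = beta2 * v[i] + (1 - beta2) * g * g
            mhat = m[i] / (1 - beta1 ** t)
            vhat = v[i] / (1 - beta2 ** t)
            params[i] -= lr * mhat / (np.sqrt(vhat) + eps)
    return params

def predict(params, X):
    """Forward pass for a trained network from train_relu_net."""
    W1, b1, W2, b2 = params
    return (np.maximum(X @ W1 + b1, 0.0) @ W2 + b2).ravel()
```

In PI3NN the same loop would be run three times, once per network (the mean predictor and the two bound networks), with identical optimizer settings.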