Justification-Based Reliability in Machine Learning

Authors: Nurali Virani, Naresh Iyer, Zhaoyuan Yang

AAAI 2020, pp. 6078-6085 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Through experiments conducted on simulated and real datasets, we demonstrate that our approach can provide reliability for individual predictions and characterize regions where such reliability cannot be ascertained."
Researcher Affiliation | Industry | Nurali Virani, Naresh Iyer, Zhaoyuan Yang; GE Research, 1 Research Circle, Niskayuna, NY 12309; {nurali.virani, iyerna, zhaoyuan.yang}@ge.com
Pseudocode | Yes | "Algorithm 1: Training Epistemic Classifiers" and "Algorithm 2: Inference with Epistemic Classifiers" (a hypothetical code sketch follows this table)
Open Source Code | No | The paper does not explicitly state that its source code is released, nor does it link to a repository implementing the described methodology.
Open Datasets | Yes | "We then conducted experiments using Grid Stability (Arzamasov, Böhm, and Jochem 2018) and Iris dataset from UCI Repository (Dua and Graff 2017), Italy Power Demand classification (Keogh et al. 2006) and Synthetic Control (SynCon) dataset (Alcock, Manolopoulos, and others 1999) from UCR Time-series Repository (Chen et al. 2015), MNIST image dataset (LeCun 1998), and German Traffic Sign Recognition Benchmark (GTSRB) dataset (Stallkamp et al. 2012)"
Dataset Splits | No | The paper mentions using a validation set but gives no details on its size, proportion, or construction (e.g., an 80/10/10 split): "The parameter optimization is conducted by evaluating metrics over the validation set (Xv, Yv)." (An illustrative split example follows this table.)
Hardware Specification | No | The paper does not specify the hardware used for the experiments (e.g., GPU/CPU models or memory).
Software Dependencies | No | The paper names software such as scikit-learn and the Adversarial Robustness Toolbox but provides no version numbers for them or for any other dependencies: "In our implementation, we use ball-tree method from scikit-learn (Pedregosa et al. 2011) to construct the neighbor search tree."
Experiment Setup | No | The paper discusses the parameters of its Epistemic Classifier (e.g., ε, k) and the type of neural networks used, but it does not report specific training hyperparameters (e.g., learning rate, batch size, epochs) for these networks.
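
The Pseudocode and Software Dependencies rows point to Algorithms 1 and 2 (training and inference of Epistemic Classifiers) and to scikit-learn's ball-tree neighbor search, but no code accompanies the paper. The sketch below is therefore only a minimal, hypothetical reconstruction of a single-layer neighborhood-justification step built on scikit-learn's BallTree; the class name, the layer_fn activation extractor, the default eps, and the three confidence flags are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of single-layer neighborhood justification; the names
# EpistemicClassifierSketch and layer_fn, the default eps, and the flag
# strings are illustrative assumptions, not the paper's implementation.
import numpy as np
from sklearn.neighbors import BallTree


class EpistemicClassifierSketch:
    def __init__(self, base_model, layer_fn, eps=0.5):
        self.base_model = base_model  # trained classifier exposing .predict()
        self.layer_fn = layer_fn      # maps inputs to hidden-layer activations
        self.eps = eps                # neighborhood radius in activation space

    def fit(self, X_train, y_train):
        # Build a ball-tree over training activations, mirroring the paper's
        # stated use of scikit-learn's ball-tree method for neighbor search.
        self.labels_ = np.asarray(y_train)
        self.tree_ = BallTree(self.layer_fn(X_train))
        return self

    def predict_with_justification(self, X):
        preds = self.base_model.predict(X)
        neighbor_idx = self.tree_.query_radius(self.layer_fn(X), r=self.eps)
        flags = []
        for pred, idx in zip(preds, neighbor_idx):
            support = set(self.labels_[idx])
            if support == {pred}:
                flags.append("I know")         # neighbors unanimously agree
            elif pred in support:
                flags.append("I may know")     # neighborhood has mixed labels
            else:
                flags.append("I do not know")  # empty or disagreeing neighborhood
        return preds, flags


# Example usage (illustrative): treat the raw Iris features as the "hidden
# layer" so the sketch runs end to end without a neural network.
if __name__ == "__main__":
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)
    base = LogisticRegression(max_iter=1000).fit(X, y)
    ec = EpistemicClassifierSketch(base, layer_fn=lambda a: a, eps=0.5).fit(X, y)
    preds, flags = ec.predict_with_justification(X[:5])
    print(list(zip(preds.tolist(), flags)))
```

In this sketch an input is flagged "I know" only when every training point within eps of its activation shares the predicted label; the paper's full method may aggregate such evidence across several hidden layers, which this single-layer toy omits.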
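
On the Dataset Splits row: purely to illustrate the kind of specification the assessment looks for, here is a hypothetical 80/10/10 train/validation/test split using scikit-learn's train_test_split on the Iris data; the proportions, stratification, and random seed are assumptions, not values reported in the paper.

```python
# Hypothetical 80/10/10 train/validation/test split; the proportions,
# stratification, and random_state are illustrative, not reported in the paper.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, random_state=0, stratify=y_tmp)
print(len(X_train), len(X_val), len(X_test))  # e.g. 120, 15, 15 for Iris
```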