Piecewise Linear Transformation – Propagating Aleatoric Uncertainty in Neural Networks

Authors: Thomas Krapf, Michael Hagn, Paul Miethaner, Alexander Schiller, Lucas Luttner, Bernd Heinrich

AAAI 2024

Reproducibility assessment (variable: result, followed by the LLM's supporting response):
Research Type: Experimental
"Further, our experimental evaluation validates that PLT outperforms competing methods on publicly available real-world classification and regression datasets regarding exactness. Thus, the PDs propagated by PLT allow to assess the uncertainty of the provided decisions, offering valuable support."
Researcher Affiliation: Academia
"Thomas Krapf, Michael Hagn, Paul Miethaner, Alexander Schiller, Lucas Luttner, Bernd Heinrich, Faculty for Computer Science and Data Science, University of Regensburg, {Thomas.Krapf, Michael.Hagn, Paul.Miethaner, Alexander.Schiller, Lucas.Luttner, Bernd.Heinrich}@ur.de"
Pseudocode: No
The paper does not contain any pseudocode or algorithm blocks.
Open Source Code: No
The paper makes no statement about releasing the authors' own source code or providing a link to it.
Open Datasets: Yes
"We evaluate our method on a broad range of publicly available real-world datasets from various domains for both classification and regression tasks. Details about the datasets are provided in Table 2 (cf. Appendix E). We randomly split each dataset into training and test dataset and train a standard ReLU NN for classification or regression depending on the task associated to the dataset."
Dataset Splits: No
The paper mentions splitting the data into a training and a test dataset but does not explicitly mention a validation split.
Hardware Specification: No
The paper does not specify the hardware used to run the experiments (e.g., CPU or GPU models, or cloud instances).
Software Dependencies: No
The paper mentions using MICE and Gaussian kernel density estimation but does not name specific software packages with version numbers for reproducibility.
Experiment Setup: No
The paper states that a standard ReLU NN is trained and describes the uncertainty-induction procedure and evaluation metrics, but does not provide specific hyperparameters (e.g., learning rate, batch size, epochs) or detailed training configurations for reproducibility.
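To make concrete what the quoted setup ("randomly split each dataset into training and test dataset and train a standard ReLU NN") would involve, here is a minimal illustrative sketch. This is not the authors' code: the toy dataset, the 80/20 split ratio, the hidden-layer size, the learning rate, and the epoch count are all assumptions, since the paper reports none of these details.

```python
# Illustrative sketch of a random train/test split followed by training a
# "standard ReLU NN" (one hidden layer, full-batch gradient descent on MSE).
# All hyperparameters are assumed; the paper does not report them.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data standing in for one of the real-world datasets.
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

# Random train/test split (80/20 assumed; the paper gives no ratio).
idx = rng.permutation(len(X))
split = int(0.8 * len(X))
train, test = idx[:split], idx[split:]

# One hidden layer of 32 ReLU units (size assumed), linear output.
W1 = rng.normal(0, 0.5, size=(2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, size=(32, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(500):
    h = np.maximum(0, X[train] @ W1 + b1)      # ReLU activation
    pred = (h @ W2 + b2).ravel()
    err = pred - y[train]                      # gradient of 0.5 * MSE
    gW2 = h.T @ err[:, None] / len(train)
    gb2 = err.mean(keepdims=True)
    dh = (err[:, None] * W2.T) * (h > 0)       # backprop through ReLU
    gW1 = X[train].T @ dh / len(train)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

test_pred = (np.maximum(0, X[test] @ W1 + b1) @ W2 + b2).ravel()
test_mse = float(np.mean((test_pred - y[test]) ** 2))
print(f"test MSE: {test_mse:.4f}")
```

The sketch only covers the deterministic base network; PLT's contribution, propagating input probability distributions through such a network, is not reproduced here.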