Kalman Bayesian Neural Networks for Closed-Form Online Learning

Authors: Philipp Wagner, Xinyang Wu, Marco F. Huber

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we validate the proposed KBNN in both classification and regression tasks on benchmark datasets. Four experiments are conducted: (i) evaluating the KBNN on a synthetic regression task, (ii) binary classification on the well-known Moon dataset, (iii) online learning on the Moon dataset, and (iv) comparison with other approximate inference approaches on nine UCI regression datasets (Dua and Graff 2017). The KBNN implementation merely requires matrix operations and is realized in PyTorch. The performance of the methods is assessed by means of the root mean square error (RMSE) for regression tasks, the accuracy for classification tasks, the negative log-likelihood (NLL) for quantifying the uncertainty, and the training time.
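For concreteness, the evaluation metrics named in the quote can be computed as follows; this is a minimal sketch with illustrative function names, not code from the paper:

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean square error for the regression tasks.
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def gaussian_nll(y_true, mean, var):
    # Negative log-likelihood under a Gaussian predictive distribution,
    # the usual way NLL is reported for BNN regression benchmarks.
    return np.mean(0.5 * np.log(2 * np.pi * var)
                   + 0.5 * (y_true - mean) ** 2 / var)
```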
Researcher Affiliation | Collaboration | (1) Department Cyber Cognitive Intelligence (CCI), Fraunhofer Institute for Manufacturing Engineering and Automation IPA, Stuttgart, Germany; (2) Institute of Industrial Manufacturing and Management IFF, University of Stuttgart, Germany
Pseudocode | Yes | Algorithm 1: Backward pass for training on dataset D
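The paper's Algorithm 1 itself is not reproduced here. As orientation only, KBNN's closed-form updates build on the standard Kalman measurement update, sketched below for a generic Gaussian weight belief. This is a hedged sketch under our own assumptions, not the authors' algorithm, and all names are placeholders:

```python
import numpy as np

def kalman_weight_update(mu, P, H, y, R):
    """One Kalman measurement update on a Gaussian weight belief N(mu, P).

    H maps weights to the (linearized or moment-matched) network output,
    y is the observed target, and R the observation-noise covariance.
    Generic textbook update, not Algorithm 1 from the paper.
    """
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    mu = mu + K @ (y - H @ mu)      # posterior mean
    P = P - K @ S @ K.T             # posterior covariance, equals (I - K H) P
    return mu, P
```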
Open Source Code | No | The paper does not provide concrete access to source code for the described methodology. There is no repository link, explicit code-release statement, or mention of code in supplementary materials.
Open Datasets | Yes | comparison with other approximate inference approaches on nine UCI regression datasets (Dua and Graff 2017).
Dataset Splits | No | The paper states, 'The datasets are split into random train and test sets with 90% and 10% of the data, respectively.' However, it does not describe a separate validation split or how the random split is controlled for reproducibility.
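As an illustration only, the quoted 90/10 random split can be reproduced with a standard utility; the fixed seed below is our assumption, since the paper does not report one:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder data standing in for a UCI regression dataset.
X, y = np.random.randn(1000, 8), np.random.randn(1000)

# 90% train / 10% test as quoted above; random_state is an assumption,
# since the paper reports neither a seed nor a validation split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1, random_state=0)
```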
Hardware Specification | Yes | A PC with an Intel i7-8850H CPU and 16 GB RAM, but without a GPU, is used.
Software Dependencies | No | The paper mentions PyTorch as the implementation framework but does not provide version numbers for PyTorch or any other software dependencies required to replicate the experiments.
Experiment Setup | Yes | We use a standard MLP with one hidden layer and 100 hidden neurons, and ReLU activation for the hidden layer. The output activation is linear. We compare KBNN with PBP (Hernández-Lobato and Adams 2015) and Monte Carlo (MC) Dropout (Gal and Ghahramani 2016). For both PBP and MC Dropout we use the implementations of the authors. For MC Dropout we use dropout probability 0.1, same as the authors used for regression tasks (Gal and Ghahramani 2016). All methods merely use one epoch for training in order to simulate an online learning scenario.
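Under the quoted setup, the base network is straightforward to reconstruct. The sketch below is a hedged PyTorch version; the input and output widths are placeholders, as they depend on the dataset:

```python
import torch.nn as nn

# Standard MLP from the quoted setup: one hidden layer with 100 neurons,
# ReLU hidden activation, linear output. in_features/out_features are
# placeholders; the paper sets them per dataset.
mlp = nn.Sequential(
    nn.Linear(in_features=8, out_features=100),
    nn.ReLU(),
    nn.Linear(in_features=100, out_features=1),
)
```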