Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Scaling Up Bayesian Neural Networks with Neural Networks

Authors: Zahra Moslemi, Yang Meng, Shiwei Lan, Babak Shahbaba

TMLR 2024 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type Experimental Using simulated and real data, we demonstrate that our proposed method improves computational efficiency of BNN, while maintaining similar performance in terms of prediction accuracy and uncertainty quantification. We demonstrate the effectiveness of our method on eleven synthetic and real-world datasets, comparing it against a comprehensive selection of baseline approaches. To thoroughly assess the performance and effectiveness of each method, we use a range of key metrics. These include Mean Squared Error (MSE) for regression tasks (Figure 2) and Accuracy for classification tasks (Figure 3).
Researcher Affiliation Academia Zahra Moslemi (EMAIL), Department of Statistics, University of California, Irvine, CA, USA; Yang Meng (EMAIL), Department of Statistics, University of California, Irvine, CA, USA; Shiwei Lan (EMAIL), School of Mathematical and Statistical Sciences, Arizona State University, Tempe, AZ, USA; Babak Shahbaba (EMAIL), Department of Statistics, University of California, Irvine, CA, USA
Pseudocode Yes Algorithm 1: Preconditioned Crank-Nicolson (pCN) Algorithm; Algorithm 2: Variational Inference in Bayesian Neural Networks (BNNs); Algorithm 3: Fast Bayesian Neural Network (FBNN)
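The first of these, the preconditioned Crank-Nicolson (pCN) sampler, has a standard form that can be sketched briefly. The following is a minimal illustration of one pCN step under a standard normal prior, not the paper's implementation; the function name and signature are hypothetical.

```python
import numpy as np

def pcn_step(x, log_likelihood, beta=0.2, rng=None):
    """One preconditioned Crank-Nicolson (pCN) MCMC step, assuming a
    standard normal prior. The prior is absorbed into the proposal,
    so the acceptance ratio involves only the data log-likelihood."""
    rng = np.random.default_rng() if rng is None else rng
    xi = rng.standard_normal(x.shape)
    # pCN proposal: sqrt(1 - beta^2) * x + beta * xi, prior-preserving
    proposal = np.sqrt(1.0 - beta**2) * x + beta * xi
    log_alpha = log_likelihood(proposal) - log_likelihood(x)
    if np.log(rng.uniform()) < log_alpha:
        return proposal, True   # accepted
    return x, False             # rejected, chain stays put
```

Because the proposal leaves the prior invariant, pCN's acceptance rate is robust to the dimension of `x`, which is what makes it attractive for function-space inference over BNN weights.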
Open Source Code No The paper does not provide an explicit statement about releasing source code for the described methodology, nor does it include a link to a code repository.
Open Datasets Yes To this end, we utilize the make_regression function from the sklearn.datasets package to generate a dataset... we use the Wine Quality data (Cortez et al., 2009). The Boston housing dataset was collected in 1978 (Harrison Jr & Rubinfeld, 1978). Next, we analyze the data from the National Alzheimer's Coordinating Center (NACC)... (Beekly et al., 2004)... (Beekly et al., 2007). For this data, the goal is to predict the release year of a song from audio features... (Bertin-Mahieux, 2011). Next, we use the Adult dataset (Becker & Kohavi, 1996)... The MNIST dataset is commonly used as a benchmark dataset for the hand-written digit classification task (Deng, 2012). CelebA (Liu et al., 2015) is an image dataset... The Street View House Numbers (SVHN) dataset (Netzer et al., 2011).
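The synthetic regression data comes from scikit-learn's make_regression. A minimal usage sketch follows; the parameter values here are illustrative assumptions, since the paper's exact settings are not quoted in this row.

```python
from sklearn.datasets import make_regression

# Hypothetical settings: the paper does not state n_samples, n_features,
# or noise here, so these values are placeholders for illustration.
X, y = make_regression(n_samples=1000, n_features=10,
                       noise=0.1, random_state=0)
print(X.shape, y.shape)  # (1000, 10) (1000,)
```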
Dataset Splits No The paper does not explicitly provide specific details about how datasets were split into training, validation, and test sets (e.g., percentages, exact counts, or specific predefined splits with citations) for reproduction. It generally mentions a training set $\{(X_n, Y_n)\}_{n=1}^{N}$ but lacks the specific details for the various datasets used.
Hardware Specification No The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory, or cloud instance specifications) used to run the experiments.
Software Dependencies No The paper mentions using functions from 'sklearn.datasets package' but does not specify its version or the versions of any other key software libraries, frameworks, or programming languages used for the implementation.
Experiment Setup Yes Table 2: Description of various datasets and their corresponding DNN emulator architectures (dropout layers have been used on the input layer and first hidden layer)

Task | Dataset | #Hidden Layers | #Neurons per Layer | Activation | #Epochs | Dropout Rate
Regression | Boston Housing | 2 | 3, 32 | ReLU | 1000 | 0.7
Regression | Wine Quality | 2 | 3, 32 | ReLU | 1000 | 0.5
Regression | Alzheimer | 2 | 4, 64 | ReLU | 1000 | 0.5
Regression | Year Prediction | 2 | 4, 64 | ReLU | 1000 | 0.5
Regression | Simulation | 3 | 8, 64, 32 | ReLU | 1000 | 0.5
Classification | Adult | 2 | 4, 32 | ReLU | 1000 | 0.5
Classification | MNIST | 2 | 3, 64 | ReLU | 1000 | 0.5
Classification | Alzheimer | 2 | 4, 64 | ReLU | 1000 | 0.5
Classification | CelebA | 2 | 3, 32 | ReLU | 1000 | 0.5
Classification | SVHN | 3 | 4, 64, 32 | ReLU | 1000 | 0.7
Classification | Simulation | 3 | 8, 64, 32 | ReLU | 1000 | 0.5
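The architectures in Table 2 are small MLPs with ReLU activations and dropout applied to the input and first hidden layer. As a concrete illustration, here is a NumPy forward pass for one such emulator, using the Boston Housing row (hidden sizes 3 and 32, dropout 0.7); the function name and the weights/biases interface are hypothetical, not the paper's code.

```python
import numpy as np

def emulator_forward(x, weights, biases, p_drop=0.7, train=True, rng=None):
    """Forward pass of a small MLP emulator with ReLU activations and
    (inverted) dropout on the input and first hidden layer, mirroring
    the Table 2 architecture description. `weights` and `biases` are
    lists of per-layer arrays; this interface is an assumption."""
    rng = np.random.default_rng() if rng is None else rng
    h = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        if train and i <= 1:  # dropout on input and first hidden layer only
            mask = rng.random(h.shape) >= p_drop
            h = h * mask / (1.0 - p_drop)
        h = h @ W + b
        if i < len(weights) - 1:
            h = np.maximum(h, 0.0)  # ReLU on hidden layers
    return h
```

At evaluation time, setting `train=False` disables dropout; the inverted-dropout scaling during training keeps activation magnitudes consistent between the two modes.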