Vector Quantized Bayesian Neural Network Inference for Data Streams

Authors: Namuk Park, Taekyu Lee, Songkuk Kim

AAAI 2021, pp. 9322-9330

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments including semantic segmentation on real-world data show that this model performs significantly faster than BNNs while estimating predictive results comparable to or superior to the results of BNNs. This section evaluates the performance of VQ-BNN in three sets of experiments.
Researcher Affiliation | Academia | Namuk Park, Taekyu Lee, Songkuk Kim, Yonsei University, {namuk.park, taekyulee, songkuk}@yonsei.ac.kr
Pseudocode | No | The paper describes the VQ-BNN inference process using equations and descriptive text, but no explicitly labeled pseudocode or algorithm block is present.
Open Source Code | No | The paper does not provide an explicit statement about releasing the source code for the described methodology or a link to a code repository.
Open Datasets | Yes | We use the CamVid dataset (Brostow, Fauqueur, and Cipolla 2009)... We use the NYUDv2 dataset (Nathan Silberman and Fergus 2012)...
Dataset Splits | No | The paper mentions using datasets like CamVid and NYUDv2 but does not explicitly provide specific train/validation/test split percentages, sample counts, or refer to a formally defined split with a citation in the main text.
Hardware Specification | No | The paper does not explicitly describe the specific hardware used for experiments, such as GPU or CPU models, or memory specifications.
Software Dependencies | No | The paper does not provide specific software dependency details with version numbers, such as programming language versions or library versions (e.g., Python 3.8, PyTorch 1.9).
Experiment Setup | Yes | In all experiments, we use the same hyperparameters, K = 5 and τ = 1.25... BNNs with MC dropout (Gal and Ghahramani 2016) layers predict results with 30 forward passes in Section 3.2 and Section 3.3 so that the negative log-likelihood (NLL) converge.
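
For concreteness, the sketch below contrasts the two inference regimes referenced in the Experiment Setup row: MC-dropout BNN inference, which averages 30 stochastic forward passes on each incoming frame, and VQ-BNN-style streaming inference, which runs a single forward pass per frame and reuses the K = 5 most recent predictions with decay constant τ = 1.25. This is a minimal sketch under stated assumptions: the toy `stochastic_forward` model, the three-class output, and the exponential-decay form of the importance weights are illustrative choices, not the paper's actual networks or its exact importance terms.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)

def stochastic_forward(x):
    """Toy stand-in for one dropout (stochastic) forward pass; returns class probabilities."""
    logits = x + rng.normal(scale=0.5, size=x.shape)  # dropout-like noise on a toy "network"
    e = np.exp(logits - logits.max())
    return e / e.sum()

def mc_dropout_predict(x, n_samples=30):
    """BNN baseline from the paper's setup: average 30 stochastic passes on the current frame."""
    return np.mean([stochastic_forward(x) for _ in range(n_samples)], axis=0)

class VQBNNStream:
    """VQ-BNN-style streaming inference: one forward pass per frame, with the
    K most recent predictions reused under (assumed) exponentially decaying importance."""
    def __init__(self, K=5, tau=1.25):
        self.tau = tau
        self.memory = deque(maxlen=K)  # stores (timestamp, prediction) pairs

    def predict(self, x, t):
        self.memory.append((t, stochastic_forward(x)))  # single new forward pass
        ages = np.array([t - ti for ti, _ in self.memory])
        weights = np.exp(-ages / self.tau)              # assumed importance decay
        weights /= weights.sum()
        preds = np.stack([p for _, p in self.memory])
        return weights @ preds                          # importance-weighted predictive mixture

# Usage on a synthetic 3-class stream of 10 frames.
stream = [rng.normal(size=3) for _ in range(10)]
vq = VQBNNStream(K=5, tau=1.25)
for t, frame in enumerate(stream):
    bnn_out = mc_dropout_predict(frame)  # ~30 forward passes per frame
    vq_out = vq.predict(frame, t)        # 1 forward pass per frame
```

The contrast is the cost model: the MC-dropout baseline pays roughly 30 forward passes per frame, while the streaming variant pays one, which is the source of the speedup the Research Type row alludes to.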